Pythagora Logo

Developers spend 20-30% of their time writing tests!

✊ Pythagora creates automated tests for you by analysing server activity ✊


🌟 As an open source tool, it would mean the world to us if you starred Pythagora repo 🌟
🙏 Thank you 🙏


⚙️ Installation

To integrate Pythagora into your Node.js app, you just need to install the pythagora package:

npm install pythagora

And that's it - no config or setup! You are ready to start recording your integration tests!

🎥 Capturing tests

Pythagora records all requests to your app's endpoints, along with the responses and everything that happens during each request. Currently, that means all Mongo and Redis queries with their responses (in the future: third-party API requests, disk I/O operations, etc.). Then, when you run the tests, Pythagora simulates the server conditions from the time the request was captured.

  1. From the root directory, run Pythagora in capture mode first to capture test data and mocks.
    npx pythagora --init-command "my start command" --mode capture
    E.g. if you start your Node.js app with nest start, then the command would be:
    npx pythagora --init-command "nest start" --mode capture
  2. Click around your application or make requests to your API. Pythagora will capture all requests and responses.



    NOTES:
    • to stop the capture, exit the process as you usually would (e.g. Ctrl + C)
    • on Windows, make sure to run all commands using Git Bash and not PowerShell or anything similar


▶️ Running tests

When running tests, it doesn't matter which database your Node.js app is connected to or what state that database is in. In fact, that database is never touched or used. Instead, Pythagora creates a special, ephemeral pythagoraDb database and uses it to restore, before each test is executed, the data that was present at the time the test was recorded. Because of this, tests can be run on any machine or environment.

If a test updates the database, Pythagora also checks that the database was updated correctly.

So, after you have captured all the requests you want, you just need to change the mode parameter to --mode test in the Pythagora command.

npx pythagora --init-command "my start command" --mode test



OpenAI logo

🤖 ️Generate Jest tests with Pythagora and GPT-4

You can export any Pythagora test to Jest with GPT-4. To see how it works, you can watch the full demo video here.

What are Jest integration tests made of

  • Database setup (before a test is run)
    • during the export to Jest, Pythagora saves all database documents in the pythagora_tests/exported_tests/data folder as a JSON file
    • in the beforeEach function, these documents are restored into the database so that it is in the same state as when the test was recorded
    • Pythagora has built-in functions for working with the database, but if you want to use your own and completely separate the Jest tests from Pythagora, use the global-setup.js file, where you can set up your own ways to populate the database, get a collection and clear the database
  • User authentication (when the endpoint requires authentication)
    • the first time you run the export, Pythagora will create an auth.js file
    • it is used inside the beforeEach function to retrieve the authentication token so that API requests that require authentication can be executed
  • Test
    • tests check the response from the API and whether the database was updated correctly
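If you do decide to separate the exported tests from Pythagora via global-setup.js, your custom database helpers could be shaped roughly like this. This is a hypothetical sketch: the function names (populateDatabase, getCollection, clearDatabase) are illustrative assumptions based on the description above, not Pythagora's actual hook names, and an in-memory store stands in for a real MongoDB connection:

```javascript
// Hypothetical sketch of custom database helpers for global-setup.js.
// Names are illustrative; the in-memory store replaces a real MongoDB.
const store = {};

// Remove every collection so each test starts clean.
function clearDatabase() {
  for (const key of Object.keys(store)) delete store[key];
}

// Return (and lazily create) a named collection.
function getCollection(name) {
  store[name] = store[name] || [];
  return store[name];
}

// Clear everything, then load the exported JSON snapshot for one test.
function populateDatabase(snapshot) {
  clearDatabase();
  for (const [name, docs] of Object.entries(snapshot)) {
    getCollection(name).push(...docs);
  }
}

module.exports = { clearDatabase, getCollection, populateDatabase };

// Example usage, e.g. from a beforeEach hook:
populateDatabase({ users: [{ name: "Michael" }] });
console.log(getCollection("users").length); // 1
```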

How to export Pythagora tests to Jest

  1. First, you need to tell Pythagora what your login endpoint is. You can do that by running:

    npx pythagora --export-setup
  2. After that, just run Pythagora capture command and log into the app so the login route gets captured.

    npx pythagora --init-command "my start command" --mode capture
  3. Exporting to Jest is done with GPT-4, so you need either an OpenAI API key with GPT-4 access or a Pythagora API key, which you can get here. Once you have the API key, you're ready to export tests to Jest by running:

    npx pythagora --export --test-id <TEST_ID> --openai-api-key <YOUR_OPENAI_API_KEY>

    or

    npx pythagora --export --test-id <TEST_ID> --pythagora-api-key <YOUR_PYTHAGORA_API_KEY>
  4. To run the exported tests, run:

    npx pythagora --mode jest

Exported tests will be available in the pythagora_tests/exported_tests folder.

NOTE: Pythagora uses the GPT-4 8k model, so tests that do too many things during processing might exceed the 8k token limit. To check which tests you can export to Jest, run:

npx pythagora --tests-eligible-for-export


🎞 Demo

Here are some demo videos that can help you get started.

Pythagora Alpha Demo

🎞️ ▶️ Video resources ▶️ 🎞️

Pythagora Demo (4 min)
Generate Jest tests with Pythagora and GPT-4 (4 min)
Pythagora Tech Deep Dive (16 min)
Dev Workflow With Pythagora (4 min)



🔎 Examples

Here are examples of open-source repositories that we forked and created tests for with Pythagora, so you can easily see it in action.

MERN E-commerce Reddish Trellis



🔧 Maintenance / update of tests

Sometimes failing tests are expected behaviour, such as when the code's behaviour was intentionally changed. In those cases, tests need to be updated. Pythagora provides a git-like interface where you can review all changes that are breaking a test and easily (A)ccept them if they are expected, or (D)elete the test if you think it's invalid. To start the review process, just run the Pythagora command with the --review flag.

npx pythagora --review

You can watch the workflow with Pythagora video, in which I go deeper into the details of the review process.

❌ Deleting tests

If you've made bigger changes to the repo and want to rewrite many tests, you can delete all failed tests with the --delete-all-failed flag.

npx pythagora --delete-all-failed

If you want to delete a single test by its testId, use --delete testId like this:

npx pythagora --delete testId

📖 Other Options

These are the available options for the Pythagora command:


--rerun-all-failed (reruns only the tests that failed in the previous run)

npx pythagora --init-command "my start command" --mode test --rerun-all-failed

--test-id (runs a single test by its ID)

npx pythagora --init-command "my start command" --mode test --test-id testId

--pick endpoint1 endpoint2 (captures only the listed endpoints)

npx pythagora --init-command "my start command" --mode capture --pick /endpoint1 /endpoint2

--ignore endpoint1 endpoint2 (captures everything except the listed endpoints)

npx pythagora --init-command "my start command" --mode capture --ignore /endpoint1 /endpoint2



📝 Code Coverage Report

Code coverage is a great metric when building automated tests, as it shows which lines of code are covered by the tests. Pythagora uses nyc to generate a report about the code that was covered by Pythagora tests. By default, Pythagora shows a basic code coverage report summary when you run tests.

If you want to generate a more detailed report, you can do so by running Pythagora with the --full-code-coverage-report flag. E.g.:

npx pythagora --init-command "my start command" --mode test --full-code-coverage-report

You can find the code coverage report inside the pythagora_tests folder in the root of your repository. You can open the HTML view of the report by opening pythagora_tests/code_coverage_report/lcov-report/index.html.


If you don't want code coverage to be shown at all while running tests, run them with the --no-code-coverage parameter. This is helpful during debugging, since the code coverage report can clash with your IDE's debugger.



🔑 Authentication

For authentication, we support JWT, sessions stored in Redis and sessions stored in MongoDB. The first 2 cases (JWT and sessions stored in Redis) should work just fine without any additional implementation, but for sessions stored in MongoDB you need to add this one line of code:
if (global.Pythagora) global.Pythagora.authenticationMiddleware = true;

just before your authentication middleware. For example, if you are using express-session, you would add our line of code just above the middleware that manages sessions in your DB, like this:

const session = require('express-session');
// connect-mongo provides the MongoDB-backed session store
const MongoStore = require('connect-mongo');

if (global.Pythagora) global.Pythagora.authenticationMiddleware = true;

app.use(session({
    secret: 'my-secret',
    resave: false,
    saveUninitialized: false,
    cookie: {
        maxAge: 60 * 60 * 1000
    },
    store: MongoStore.create({
        mongoUrl: mongourl, // your MongoDB connection string
        mongoOptions: {
            useNewUrlParser: true,
            useUnifiedTopology: true
        }
    })
}));

That's it! You are ready to go, and all your API requests that require authentication should PASS!



🗺️️ Where can I see the tests?

Each captured test is saved in "pythagora_tests" directory at the root of your repository.

Here is the "pythagora_tests" folder structure:
  • pythagora_tests
    • exported_tests // folder containing all exported Jest tests
      • data // folder containing Jest test data
        • JestTest1.json // this is data that is populated in DB for JestTest1.test.js
        • JestTest2.json // this is data that is populated in DB for JestTest2.test.js
        • ...
      • auth.js // here is authentication function that is used in all Jest tests
      • global-setup.js // Jest global setup if you want to use your own functions for running Jest tests
      • JestTest1.test.js // this is an exported Jest test
      • JestTest2.test.js
      • ...
    • pythagoraTest1.json // this is a Pythagora test
    • pythagoraTest2.json
    • ...


Each JSON file in this directory represents one endpoint that was captured, and each endpoint can have many captured tests. If you open one of these files, you will see an array in which each object represents a single test. All the data needed to run a test is stored in that object. Here is an example of a test object.

Here is an example of one recorded Pythagora test:
{
   "id": "b47cbee2-4a47-4b2c-80a0-feddae3081b3",
   "endpoint": "/api/boards/", // endpoint that was called
   "body": {}, // body payload that was sent with the request
   "query": {}, // query params that were sent with the request
   "params": {}, // params that were sent with the request
   "method": "GET", // HTTP method that was used
   "headers": { // headers that were sent with the request
      "x-forwarded-host": "localhost:3000",
      ...
   },
   "statusCode": 200, // status code that was returned
   "responseData": "...", // response data that was received
   "intermediateData": [ // server activity that was captured during the request
      {
         "type": "mongodb", // type of the activity - mongo query in this case
         "op": "findOneAndUpdate",
         "db": "ecomm",
         "collection": "users",
         "query": { // mongo match query that was executed
            "_id": "ObjectId(\"63f5e8272c78361761e9fcf1\")"
         },
         "otherArgs": {
            "update": { // data that needs to be updated
               "$set": {
                  "name": "Steve",
                  ...
               }
            },
            ...
         },
         "options": {
            "upsert": false,
            ...
         },
         "preQueryRes": [ // data that was present in the database before the query was executed
            {
               "_id": "ObjectId(\"63f5e8272c78361761e9fcf1\")",
               "name": "Michael",
               ...
            }
         ],
         "mongoRes": [ // data that was returned by the query
            {
               "_id": "ObjectId(\"63f5e8272c78361761e9fcf1\")",
               "name": "Steve",
               ...
            }
         ],
         "postQueryRes": [ // data that was present in the database after the query was executed
            {
               "_id": "ObjectId(\"63f5e8272c78361761e9fcf1\")",
               "name": "Steve",
               ...
            }
         ]
      }
   ],
   "createdAt": "2023-02-22T14:57:52.362Z" // date when the test was captured
}
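Since each captured test file is plain JSON, you can inspect it with a few lines of Node.js. The sketch below summarizes test objects by their key fields; the inline array mimics a file's contents, and in practice you would load one of your own files with fs.readFileSync and JSON.parse:

```javascript
// Sketch: summarize the test objects in a pythagora_tests JSON file.
// The inline array mimics file contents; load a real file in practice.
const tests = [
  { method: "GET", endpoint: "/api/boards/", statusCode: 200 },
  { method: "POST", endpoint: "/api/boards/", statusCode: 201 },
];

const summary = tests.map(
  (t) => `${t.method} ${t.endpoint} -> ${t.statusCode}`
);
console.log(summary.join("\n"));
// GET /api/boards/ -> 200
// POST /api/boards/ -> 201
```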



🤔️ FAQ

  • What happens when I make an intentional change that breaks tests? How can I update Pythagora tests?

    • Pythagora tests can easily be updated by running the review command (npx pythagora --review). The review process is basically the same as a git review: you'll see each difference between the captured test and the failed one, so you can choose whether you need to debug it or want to accept the new change. If you press a (as in "accept"), the test will update automatically.
  • Automated tests should show me where the bug is - how can I find a bug with Pythagora tests?

    • When a test fails, you can easily rerun it by adding --test-id <TEST_ID> to the test command. This way, if you add breakpoints across your code, you'll be able to easily debug the test itself with all the data it uses. We also plan to add bug-tracking features, but at the moment we don't know when they will be ready.



⛑️ Support

For now, we support projects that use:

Other technologies that Pythagora works with (some still upcoming):

Apollo Server, GraphQL, NestJS, Next.js, Nuxt.js, PostgreSQL

🏁 Alpha version

This is an alpha version of Pythagora. To get an update about the beta release, or to suggest tech (framework / database) you want Pythagora to support, you can 👉 add your email / comment here 👈.


🔗 Connect with us

💬 Join the discussion on our Discord server.

📨 Get updates on new features and the beta release by adding your email here.

⭐ Star this repo to show support.