{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":756620350,"defaultBranch":"main","name":"openhouse","ownerLogin":"linkedin","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2024-02-13T00:52:30.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/357098?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1717112972.0","currentOid":""},"activityList":{"items":[{"before":"6390661e8f66447e6b7d3020357a5a9b5201ea58","after":"226b378bd449e3ff8d537224eb0b816398cd4660","ref":"refs/heads/main","pushedAt":"2024-05-30T23:32:49.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jiang95-dev","name":"Levi Jiang","path":"/jiang95-dev","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/19853208?s=80&v=4"},"commit":{"message":"JobScheduler switch using getAllTables to searchTables api (#95)","shortMessageHtmlLink":"JobScheduler switch using getAllTables to searchTables api (#95)"}},{"before":"d0c6583c7dd5b73c4140ef74d5672a71592ee9c9","after":"6390661e8f66447e6b7d3020357a5a9b5201ea58","ref":"refs/heads/main","pushedAt":"2024-05-30T20:09:18.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jainlavina","name":"Lavina Jain","path":"/jainlavina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/114708561?s=80&v=4"},"commit":{"message":"[PR1/5] Add S3 Storage type and S3StorageClient (#107)\n\n## Summary\r\nOpenhouse catalog currently supports HDFS as the storage backend. The\r\nend goal of this effort is to add the integration with S3 so that the\r\nstorage backend can be configured to be S3 vs HDFS based on storage\r\ntype.\r\nThe entire work will be done via a series of PRs:\r\n1. Add S3 Storage type and S3StorageClient.\r\n2. Add base class for StorageClient and move common logic like\r\nvalidation of properties there to avoid code duplication.\r\n3. Add S3Storage implementation that uses S3StorageClient.\r\n4. Add support for using S3FileIO for S3 storage type.\r\n5. Add a recipe for end-to-end testing in docker.\r\n\r\nThis PR addresses 1 by adding S3 storage type and S3StorageClient.\r\n\r\n## Changes\r\n\r\n- [ ] Client-facing API Changes\r\n- [ ] Internal API Changes\r\n- [ ] Bug Fixes\r\n- [x] New Features\r\n- [ ] Performance Improvements\r\n- [ ] Code Style\r\n- [ ] Refactoring\r\n- [ ] Documentation\r\n- [ ] Tests\r\n\r\nFor all the boxes checked, please include additional details of the\r\nchanges made in this pull request.\r\n\r\n## Testing Done\r\nAdded unit tests. This is one of the first few PRs to complete plugging\r\nin S3 storage. More test recipes to test with S3 storage will be added\r\nwhen S3 storage is completely plugged in. Currently tested\r\noh-hadoop-spark recipe to ensure that the HDFS integration is not broken\r\nby this change.\r\n\r\n- [x] Manually Tested on local docker setup. Please include commands\r\nran, and their output.\r\n- [ ] Added new tests for the changes made.\r\n- [ ] Updated existing tests to reflect the changes made.\r\n- [ ] No tests added or updated. Please explain why. If unsure, please\r\nfeel free to ask for help.\r\n- [ ] Some other form of testing like staging or soak time in\r\nproduction. Please explain.\r\n\r\nFor all the boxes checked, include a detailed description of the testing\r\ndone for the changes made in this pull request.\r\n\r\nDocker testing oh-hadoop-spark recipe:\r\n\r\n1. Create table:\r\n\r\n$ curl \"${curlArgs[@]}\" -XPOST\r\nhttp://localhost:8000/v1/databases/d3/tables/ \\\r\n> --data-raw '{\r\n> \"tableId\": \"t1\",\r\n> \"databaseId\": \"d3\",\r\n> \"baseTableVersion\": \"INITIAL_VERSION\",\r\n> \"clusterId\": \"LocalHadoopCluster\",\r\n> \"schema\": \"{\\\"type\\\": \\\"struct\\\", \\\"fields\\\": [{\\\"id\\\":\r\n1,\\\"required\\\": true,\\\"name\\\": \\\"id\\\",\\\"type\\\": \\\"string\\\"},{\\\"id\\\":\r\n2,\\\"required\\\": true,\\\"name\\\": \\\"name\\\",\\\"type\\\": \\\"string\\\"},{\\\"id\\\":\r\n3,\\\"required\\\": true,\\\"name\\\": \\\"ts\\\",\\\"type\\\": \\\"timestamp\\\"}]}\",\r\n> \"timePartitioning\": {\r\n> \"columnName\": \"ts\",\r\n> \"granularity\": \"HOUR\"\r\n> },\r\n> \"clustering\": [\r\n> {\r\n> \"columnName\": \"name\"\r\n> }\r\n> ],\r\n> \"tableProperties\": {\r\n> \"key\": \"value\"\r\n> }\r\n> }' | json_pp\r\n% Total % Received % Xferd Average Speed Time Time Time Current\r\nDload Upload Total Spent Left Speed\r\n100 2174 0 1600 100 574 548 196 0:00:02 0:00:02 --:--:-- 745\r\n{\r\n \"clusterId\" : \"LocalHadoopCluster\",\r\n \"clustering\" : [\r\n {\r\n \"columnName\" : \"name\",\r\n \"transform\" : null\r\n }\r\n ],\r\n \"creationTime\" : 1717098450027,\r\n \"databaseId\" : \"d3\",\r\n \"lastModifiedTime\" : 1717098450027,\r\n \"policies\" : null,\r\n\"schema\" :\r\n\"{\\\"type\\\":\\\"struct\\\",\\\"schema-id\\\":0,\\\"fields\\\":[{\\\"id\\\":1,\\\"name\\\":\\\"id\\\",\\\"required\\\":true,\\\"type\\\":\\\"string\\\"},{\\\"id\\\":2,\\\"name\\\":\\\"name\\\",\\\"required\\\":true,\\\"type\\\":\\\"string\\\"},{\\\"id\\\":3,\\\"name\\\":\\\"ts\\\",\\\"required\\\":true,\\\"type\\\":\\\"timestamp\\\"}]}\",\r\n \"tableCreator\" : \"DUMMY_ANONYMOUS_USER\",\r\n \"tableId\" : \"t1\",\r\n\"tableLocation\" :\r\n\"hdfs://namenode:9000/data/openhouse/d3/t1-30f90df6-4c45-47ee-916c-7cf0bdea5d4d/00000-5fec4fc6-4bf9-4bab-b6a9-112967a57497.metadata.json\",\r\n \"tableProperties\" : {\r\n \"key\" : \"value\",\r\n \"openhouse.clusterId\" : \"LocalHadoopCluster\",\r\n \"openhouse.creationTime\" : \"1717098450027\",\r\n \"openhouse.databaseId\" : \"d3\",\r\n \"openhouse.lastModifiedTime\" : \"1717098450027\",\r\n \"openhouse.tableCreator\" : \"DUMMY_ANONYMOUS_USER\",\r\n \"openhouse.tableId\" : \"t1\",\r\n\"openhouse.tableLocation\" :\r\n\"/data/openhouse/d3/t1-30f90df6-4c45-47ee-916c-7cf0bdea5d4d/00000-5fec4fc6-4bf9-4bab-b6a9-112967a57497.metadata.json\",\r\n \"openhouse.tableType\" : \"PRIMARY_TABLE\",\r\n \"openhouse.tableUUID\" : \"30f90df6-4c45-47ee-916c-7cf0bdea5d4d\",\r\n \"openhouse.tableUri\" : \"LocalHadoopCluster.d3.t1\",\r\n \"openhouse.tableVersion\" : \"INITIAL_VERSION\",\r\n \"policies\" : \"\",\r\n \"write.format.default\" : \"orc\",\r\n \"write.metadata.delete-after-commit.enabled\" : \"true\",\r\n \"write.metadata.previous-versions-max\" : \"28\"\r\n },\r\n \"tableType\" : \"PRIMARY_TABLE\",\r\n \"tableUUID\" : \"30f90df6-4c45-47ee-916c-7cf0bdea5d4d\",\r\n \"tableUri\" : \"LocalHadoopCluster.d3.t1\",\r\n \"tableVersion\" : \"INITIAL_VERSION\",\r\n \"timePartitioning\" : {\r\n \"columnName\" : \"ts\",\r\n \"granularity\" : \"HOUR\"\r\n }\r\n}\r\n\r\n2. Read table:\r\n$ curl \"${curlArgs[@]}\" -XGET\r\nhttp://localhost:8000/v1/databases/d3/tables/t1 | json_pp\r\n% Total % Received % Xferd Average Speed Time Time Time Current\r\nDload Upload Total Spent Left Speed\r\n100 1600 0 1600 0 0 22736 0 --:--:-- --:--:-- --:--:-- 22857\r\n{\r\n \"clusterId\" : \"LocalHadoopCluster\",\r\n \"clustering\" : [\r\n {\r\n \"columnName\" : \"name\",\r\n \"transform\" : null\r\n }\r\n ],\r\n \"creationTime\" : 1717098450027,\r\n \"databaseId\" : \"d3\",\r\n \"lastModifiedTime\" : 1717098450027,\r\n \"policies\" : null,\r\n\"schema\" :\r\n\"{\\\"type\\\":\\\"struct\\\",\\\"schema-id\\\":0,\\\"fields\\\":[{\\\"id\\\":1,\\\"name\\\":\\\"id\\\",\\\"required\\\":true,\\\"type\\\":\\\"string\\\"},{\\\"id\\\":2,\\\"name\\\":\\\"name\\\",\\\"required\\\":true,\\\"type\\\":\\\"string\\\"},{\\\"id\\\":3,\\\"name\\\":\\\"ts\\\",\\\"required\\\":true,\\\"type\\\":\\\"timestamp\\\"}]}\",\r\n \"tableCreator\" : \"DUMMY_ANONYMOUS_USER\",\r\n \"tableId\" : \"t1\",\r\n\"tableLocation\" :\r\n\"hdfs://namenode:9000/data/openhouse/d3/t1-30f90df6-4c45-47ee-916c-7cf0bdea5d4d/00000-5fec4fc6-4bf9-4bab-b6a9-112967a57497.metadata.json\",\r\n \"tableProperties\" : {\r\n \"key\" : \"value\",\r\n \"openhouse.clusterId\" : \"LocalHadoopCluster\",\r\n \"openhouse.creationTime\" : \"1717098450027\",\r\n \"openhouse.databaseId\" : \"d3\",\r\n \"openhouse.lastModifiedTime\" : \"1717098450027\",\r\n \"openhouse.tableCreator\" : \"DUMMY_ANONYMOUS_USER\",\r\n \"openhouse.tableId\" : \"t1\",\r\n\"openhouse.tableLocation\" :\r\n\"/data/openhouse/d3/t1-30f90df6-4c45-47ee-916c-7cf0bdea5d4d/00000-5fec4fc6-4bf9-4bab-b6a9-112967a57497.metadata.json\",\r\n \"openhouse.tableType\" : \"PRIMARY_TABLE\",\r\n \"openhouse.tableUUID\" : \"30f90df6-4c45-47ee-916c-7cf0bdea5d4d\",\r\n \"openhouse.tableUri\" : \"LocalHadoopCluster.d3.t1\",\r\n \"openhouse.tableVersion\" : \"INITIAL_VERSION\",\r\n \"policies\" : \"\",\r\n \"write.format.default\" : \"orc\",\r\n \"write.metadata.delete-after-commit.enabled\" : \"true\",\r\n \"write.metadata.previous-versions-max\" : \"28\"\r\n },\r\n \"tableType\" : \"PRIMARY_TABLE\",\r\n \"tableUUID\" : \"30f90df6-4c45-47ee-916c-7cf0bdea5d4d\",\r\n \"tableUri\" : \"LocalHadoopCluster.d3.t1\",\r\n \"tableVersion\" : \"INITIAL_VERSION\",\r\n \"timePartitioning\" : {\r\n \"columnName\" : \"ts\",\r\n \"granularity\" : \"HOUR\"\r\n }\r\n}\r\n\r\n3. List tables:\r\n$ curl \"${curlArgs[@]}\" -XGET\r\nhttp://localhost:8000/v1/databases/d3/tables/ | json_pp\r\n% Total % Received % Xferd Average Speed Time Time Time Current\r\nDload Upload Total Spent Left Speed\r\n100 1614 0 1614 0 0 30447 0 --:--:-- --:--:-- --:--:-- 31038\r\n{\r\n \"results\" : [\r\n {\r\n \"clusterId\" : \"LocalHadoopCluster\",\r\n \"clustering\" : [\r\n {\r\n \"columnName\" : \"name\",\r\n \"transform\" : null\r\n }\r\n ],\r\n \"creationTime\" : 1717098450027,\r\n \"databaseId\" : \"d3\",\r\n \"lastModifiedTime\" : 1717098450027,\r\n \"policies\" : null,\r\n\"schema\" :\r\n\"{\\\"type\\\":\\\"struct\\\",\\\"schema-id\\\":0,\\\"fields\\\":[{\\\"id\\\":1,\\\"name\\\":\\\"id\\\",\\\"required\\\":true,\\\"type\\\":\\\"string\\\"},{\\\"id\\\":2,\\\"name\\\":\\\"name\\\",\\\"required\\\":true,\\\"type\\\":\\\"string\\\"},{\\\"id\\\":3,\\\"name\\\":\\\"ts\\\",\\\"required\\\":true,\\\"type\\\":\\\"timestamp\\\"}]}\",\r\n \"tableCreator\" : \"DUMMY_ANONYMOUS_USER\",\r\n \"tableId\" : \"t1\",\r\n\"tableLocation\" :\r\n\"hdfs://namenode:9000/data/openhouse/d3/t1-30f90df6-4c45-47ee-916c-7cf0bdea5d4d/00000-5fec4fc6-4bf9-4bab-b6a9-112967a57497.metadata.json\",\r\n \"tableProperties\" : {\r\n \"key\" : \"value\",\r\n \"openhouse.clusterId\" : \"LocalHadoopCluster\",\r\n \"openhouse.creationTime\" : \"1717098450027\",\r\n \"openhouse.databaseId\" : \"d3\",\r\n \"openhouse.lastModifiedTime\" : \"1717098450027\",\r\n \"openhouse.tableCreator\" : \"DUMMY_ANONYMOUS_USER\",\r\n \"openhouse.tableId\" : \"t1\",\r\n\"openhouse.tableLocation\" :\r\n\"/data/openhouse/d3/t1-30f90df6-4c45-47ee-916c-7cf0bdea5d4d/00000-5fec4fc6-4bf9-4bab-b6a9-112967a57497.metadata.json\",\r\n \"openhouse.tableType\" : \"PRIMARY_TABLE\",\r\n\"openhouse.tableUUID\" : \"30f90df6-4c45-47ee-916c-7cf0bdea5d4d\",\r\n \"openhouse.tableUri\" : \"LocalHadoopCluster.d3.t1\",\r\n \"openhouse.tableVersion\" : \"INITIAL_VERSION\",\r\n \"policies\" : \"\",\r\n \"write.format.default\" : \"orc\",\r\n \"write.metadata.delete-after-commit.enabled\" : \"true\",\r\n \"write.metadata.previous-versions-max\" : \"28\"\r\n },\r\n \"tableType\" : \"PRIMARY_TABLE\",\r\n \"tableUUID\" : \"30f90df6-4c45-47ee-916c-7cf0bdea5d4d\",\r\n \"tableUri\" : \"LocalHadoopCluster.d3.t1\",\r\n \"tableVersion\" : \"INITIAL_VERSION\",\r\n \"timePartitioning\" : {\r\n \"columnName\" : \"ts\",\r\n \"granularity\" : \"HOUR\"\r\n }\r\n }\r\n ]\r\n}\r\n\r\n4. Delete table:\r\n$ curl \"${curlArgs[@]}\" -XDELETE\r\nhttp://localhost:8000/v1/databases/d3/tables/t1\r\nlajain-mn2:oh-hadoop-spark lajain$ curl \"${curlArgs[@]}\" -XGET\r\nhttp://localhost:8000/v1/databases/d3/tables/ | json_pp\r\n% Total % Received % Xferd Average Speed Time Time Time Current\r\nDload Upload Total Spent Left Speed\r\n100 14 0 14 0 0 331 0 --:--:-- --:--:-- --:--:-- 333\r\n{\r\n \"results\" : []\r\n}\r\n\r\nTesting via spark shell:\r\nscala> spark.sql(\"CREATE TABLE openhouse.db.tb (ts timestamp, col1\r\nstring, col2 string) PARTITIONED BY (days(ts))\").show()\r\n++\r\n||\r\n++\r\n++\r\n\r\n\r\nscala> spark.sql(\"DESCRIBE TABLE openhouse.db.tb\").show()\r\n+--------------+---------+-------+\r\n| col_name|data_type|comment|\r\n+--------------+---------+-------+\r\n| ts|timestamp| |\r\n| col1| string| |\r\n| col2| string| |\r\n| | | |\r\n|# Partitioning| | |\r\n| Part 0| days(ts)| |\r\n+--------------+---------+-------+\r\n\r\n\r\nscala> spark.sql(\"INSERT INTO TABLE openhouse.db.tb VALUES\r\n(current_timestamp(), 'val1', 'val2')\")\r\nres2: org.apache.spark.sql.DataFrame = []\r\n\r\nscala> spark.sql(\"INSERT INTO TABLE openhouse.db.tb VALUES\r\n(date_sub(CAST(current_timestamp() as DATE), 30), 'val1', 'val2')\")\r\nres3: org.apache.spark.sql.DataFrame = []\r\n\r\nscala> spark.sql(\"INSERT INTO TABLE openhouse.db.tb VALUES\r\n(date_sub(CAST(current_timestamp() as DATE), 60), 'val1', 'val2')\")\r\nres4: org.apache.spark.sql.DataFrame = []\r\n\r\nscala> spark.sql(\"SELECT * FROM openhouse.db.tb\").show()\r\n+--------------------+----+----+\r\n| ts|col1|col2|\r\n+--------------------+----+----+\r\n|2024-05-30 19:52:...|val1|val2|\r\n| 2024-03-31 00:00:00|val1|val2|\r\n| 2024-04-30 00:00:00|val1|val2|\r\n+--------------------+----+----+\r\n\r\n\r\nscala> spark.sql(\"SHOW TABLES IN openhouse.db\").show()\r\n+---------+---------+\r\n|namespace|tableName|\r\n+---------+---------+\r\n| db| tb|\r\n+---------+---------+\r\n\r\n\r\n# Additional Information\r\n\r\n- [ ] Breaking Changes\r\n- [ ] Deprecations\r\n- [x] Large PR broken into smaller PRs, and PR plan linked in the\r\ndescription.\r\n\r\nFor all the boxes checked, include additional details of the changes\r\nmade in this pull request.","shortMessageHtmlLink":"[PR1/5] Add S3 Storage type and S3StorageClient (#107)"}},{"before":"f84712e446a7e5332be4099147b3635a6bc4f05b","after":"d0c6583c7dd5b73c4140ef74d5672a71592ee9c9","ref":"refs/heads/main","pushedAt":"2024-05-30T00:04:22.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"HotSushi","name":"Sushant Raikar","path":"/HotSushi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6441597?s=80&v=4"},"commit":{"message":"Move CatalogOperationsTest to `:openhouse-spark-itest` module (#112)\n\nMove CatalogOperationsTest to `:openhouse-spark-itest` module (#112)","shortMessageHtmlLink":"Move CatalogOperationsTest to :openhouse-spark-itest module (#112)"}},{"before":"b02fa9da7daae4e573efe8e61b841d1e1d3f2106","after":"f84712e446a7e5332be4099147b3635a6bc4f05b","ref":"refs/heads/main","pushedAt":"2024-05-29T21:39:42.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"HotSushi","name":"Sushant Raikar","path":"/HotSushi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6441597?s=80&v=4"},"commit":{"message":"Fix 'io-impl' relocation in `openhouse-java-runtime` and `openhouse-spark-runtime` (#111)\n\n## Summary\r\nThis bug was discovered in the PR:\r\nhttps://github.com/linkedin/openhouse/pull/106\r\n\r\nAs a result of this bug, we need to specify the spark config as:\r\n```\r\n--conf spark.sql.catalog.openhouse.com.linkedin.openhouse.relocated.io-impl=XYZ\r\n```\r\ninstead of the right way:\r\n```\r\n--conf spark.sql.catalog.openhouse.io-impl=XYZ\r\n```\r\nThis bug was introduced because of incorrect relocation.\r\nThe jar before the change contains following code:\r\n\r\n![image](https://github.com/linkedin/openhouse/assets/6441597/9b95ab75-3174-4d1c-bd15-1ba06795f0ec)\r\n\r\n## Changes\r\n\r\n- [X] Bug Fixes\r\n\r\nFor all the boxes checked, please include additional details of the\r\nchanges made in this pull request.\r\n\r\n## Testing Done\r\n✅ build will succeed\r\n✅ new jar and code looks good:\r\n\r\n![image](https://github.com/linkedin/openhouse/assets/6441597/25257b3d-5fd1-4efc-9746-89205b1f0902)\r\n✅ older relocation looks good:\r\n\r\n![image](https://github.com/linkedin/openhouse/assets/6441597/166263c1-8890-447a-8b1c-a0215c90ce9c)\r\n\r\n- [X] Some other form of testing.","shortMessageHtmlLink":"Fix 'io-impl' relocation in openhouse-java-runtime and `openhouse-s…"}},{"before":"58a05a80c5dc35a65a18c3e7cab6f4370d0ad56c","after":"b02fa9da7daae4e573efe8e61b841d1e1d3f2106","ref":"refs/heads/main","pushedAt":"2024-05-28T21:40:05.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"teamurko","name":"Stas Pak","path":"/teamurko","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/270580?s=80&v=4"},"commit":{"message":"Remove date_trunc transform on a partition timestamp column in retention stmt (#110)\n\n## Summary\r\n\r\nThis is an optimization change to enable predicate-pushdown in delete\r\nstatement in retention job.\r\nSpark plan without the change:\r\n```\r\nscala> spark.sql(\"explain select count(*) from T where date_trunc('HOUR', datepartition) < date_trunc('HOUR', current_timestamp() - INTERVAL 72 HOURs)\").show(2000, false)\r\n... \r\n|== Physical Plan ==\r\nAdaptiveSparkPlan isFinalPlan=false\r\n+- HashAggregate(keys=[], functions=[count(1)])\r\n +- Exchange SinglePartition, ENSURE_REQUIREMENTS, [id=#46]\r\n +- HashAggregate(keys=[], functions=[partial_count(1)])\r\n +- Project\r\n +- Filter (date_trunc(HOUR, datepartition#188, Some(UTC)) < 1716652800000000)\r\n +- BatchScan openhouse.T[datepartition#188] openhouse.T [filters=]\r\n```\r\nSpark plan after the change:\r\n```\r\nscala> spark.sql(\"explain select count(*) from T where datepartition < date_trunc('HOUR', current_timestamp() - INTERVAL 72 HOURs)\").show(2000, false)\r\n...\r\n|== Physical Plan ==\r\nAdaptiveSparkPlan isFinalPlan=false\r\n+- HashAggregate(keys=[], functions=[count(1)])\r\n +- Exchange SinglePartition, ENSURE_REQUIREMENTS, [id=#71]\r\n +- HashAggregate(keys=[], functions=[partial_count(1)])\r\n +- Project\r\n +- Filter (isnotnull(datepartition#220) AND (datepartition#220 < 1716656400000000))\r\n +- BatchScan openhouse.T[datepartition#220] openhouse.T[filters=datepartition IS NOT NULL, datepartition < 1716656400000000]\r\n```\r\n```\r\nscala> spark.sql(\"explain delete from T where datepartition < date_trunc('HOUR', current_timestamp() - INTERVAL 72 HOURs)\").show(2000, false)\r\n...\r\n|== Physical Plan ==\r\nReplaceData IcebergBatchWrite(table=openhouse.T, format=ORC), org.apache.spark.sql.execution.datasources.v2.ExtendedDataSourceV2Strategy$$Lambda$4425/341350645@38e84b78\r\n+- AdaptiveSparkPlan isFinalPlan=false\r\n +- Project [...]\r\n +- Sort [_file#279 ASC NULLS FIRST, _pos#280L ASC NULLS FIRST], false, 0\r\n +- Filter NOT ((isnotnull(datepartition#252) AND (datepartition#252 < 1716656400000000)) <=> true)\r\n +- ...] openhouse.T [filters=datepartition IS NOT NULL, datepartition < 1716656400000000]\r\n +- HashAggregate(keys=[_file#279], functions=[])\r\n +- Exchange hashpartitioning(_file#279, 200), ENSURE_REQUIREMENTS, [id=#122]\r\n +- HashAggregate(keys=[_file#279], functions=[])\r\n +- Project [_file#279]\r\n +- Filter (isnotnull(datepartition#252) AND (datepartition#252 < 1716656400000000))\r\n +- ExtendedBatchScan[...] openhouse.T [filters=datepartition IS NOT NULL, datepartition < 1716656400000000]\r\n```\r\n## Changes\r\n\r\n- [ ] Client-facing API Changes\r\n- [ ] Internal API Changes\r\n- [ ] Bug Fixes\r\n- [ ] New Features\r\n- [x] Performance Improvements\r\n- [ ] Code Style\r\n- [ ] Refactoring\r\n- [ ] Documentation\r\n- [ ] Tests\r\n\r\nFor all the boxes checked, please include additional details of the\r\nchanges made in this pull request.\r\n\r\n## Testing Done\r\n\r\n\r\n- [ ] Manually Tested on local docker setup. Please include commands\r\nran, and their output.\r\n- [ ] Added new tests for the changes made.\r\n- [x] Updated existing tests to reflect the changes made.\r\n- [ ] No tests added or updated. Please explain why. If unsure, please\r\nfeel free to ask for help.\r\n- [x] Some other form of testing like staging or soak time in\r\nproduction. Please explain.\r\n\r\nRan explain on the stmt and existing tests pass.\r\n# Additional Information\r\n\r\n- [ ] Breaking Changes\r\n- [ ] Deprecations\r\n- [ ] Large PR broken into smaller PRs, and PR plan linked in the\r\ndescription.\r\n\r\nFor all the boxes checked, include additional details of the changes\r\nmade in this pull request.","shortMessageHtmlLink":"Remove date_trunc transform on a partition timestamp column in retent…"}},{"before":"46b1d0a4fd7d1c6f22acf44ed6eb3d2fdcce5763","after":"58a05a80c5dc35a65a18c3e7cab6f4370d0ad56c","ref":"refs/heads/main","pushedAt":"2024-05-22T18:44:13.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"rohitkum2506","name":"Rohit kumar","path":"/rohitkum2506","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1126602?s=80&v=4"},"commit":{"message":"Adding memory config param in JobConf for spark applications (#103)\n\n## Summary\r\n\r\nBy default, spark applications run with 1G memory. OH maintenance jobs\r\nfor tables with large size require larger memory in order to avoid OOM\r\nfailures.\r\nThe PR allows to set memory param in jobs POST request to set spark\r\nmemory config.\r\nMemory param only allows format: \r\ne.g: 4G, 10G, 250M\r\n\r\nSample POST request:\r\n` {\r\n \"jobName\": \"test_job\",\r\n \"clusterId\": \"local\",\r\n \"jobConf\": {\r\n \"jobType\": \"ORPHAN_FILES_DELETION\",\r\n \"proxyUser\": \"openhouse\",\r\n \"executionConf\": {\"memory\": \"4G\"},\r\n \"args\": [\r\n \"--tableName\", \"$table\",\r\n \"--trashDir\", \".trash\" \r\n ]\r\n }\r\n }`\r\n\r\n\r\n## Changes\r\n\r\n- [ ] Client-facing API Changes\r\n- [x] Internal API Changes\r\n- [ ] Bug Fixes\r\n- [x] New Features\r\n- [ ] Performance Improvements\r\n- [ ] Code Style\r\n- [ ] Refactoring\r\n- [ ] Documentation\r\n- [ ] Tests\r\n\r\nFor all the boxes checked, please include additional details of the\r\nchanges made in this pull request.\r\n\r\n## Testing Done\r\n\r\n- [ ] Manually Tested on local docker setup. Please include commands\r\nran, and their output.\r\n- [x] Added new tests for the changes made.\r\n- [ ] Updated existing tests to reflect the changes made.\r\n- [ ] No tests added or updated. Please explain why. If unsure, please\r\nfeel free to ask for help.\r\n- [x] Some other form of testing like staging or soak time in\r\nproduction. Please explain.\r\n\r\nBuild tests\r\nRan Setu jobs with memory config set in spark properties. ref: Image\r\n\"Screenshot\r\n\r\nDocker tests:\r\nPost with execution conf:\r\n\"Screenshot\r\n\r\nPost/Get without execution conf\r\n\"Screenshot\r\n\"Screenshot\r\n\r\nFor all the boxes checked, include a detailed description of the testing\r\ndone for the changes made in this pull request.\r\n\r\n# Additional Information\r\n\r\n- [ ] Breaking Changes\r\n- [ ] Deprecations\r\n- [ ] Large PR broken into smaller PRs, and PR plan linked in the\r\ndescription.\r\n\r\nFor all the boxes checked, include additional details of the changes\r\nmade in this pull request.","shortMessageHtmlLink":"Adding memory config param in JobConf for spark applications (#103)"}},{"before":"ca05f77e598d8522592a29632af32c62d065d348","after":"46b1d0a4fd7d1c6f22acf44ed6eb3d2fdcce5763","ref":"refs/heads/main","pushedAt":"2024-05-20T16:34:24.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jainlavina","name":"Lavina Jain","path":"/jainlavina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/114708561?s=80&v=4"},"commit":{"message":"Fix variable names to eliminate checkstyle warnings (#105)\n\n## Summary\r\n\r\nFix checkstyle warnings by fixing variable names as per Java naming\r\nconventions.\r\n\r\n## Changes\r\n\r\n- [ ] Client-facing API Changes\r\n- [ ] Internal API Changes\r\n- [ ] Bug Fixes\r\n- [ ] New Features\r\n- [ ] Performance Improvements\r\n- [x] Code Style\r\n- [ ] Refactoring\r\n- [ ] Documentation\r\n- [ ] Tests\r\n\r\nFor all the boxes checked, please include additional details of the\r\nchanges made in this pull request.\r\n\r\n## Testing Done\r\n\r\n- [x] Manually Tested on local docker setup. Please include commands\r\nran, and their output.\r\n- [ ] Added new tests for the changes made.\r\n- [ ] Updated existing tests to reflect the changes made.\r\n- [ ] No tests added or updated. Please explain why. If unsure, please\r\nfeel free to ask for help.\r\n- [ ] Some other form of testing like staging or soak time in\r\nproduction. Please explain.\r\n\r\nFor all the boxes checked, include a detailed description of the testing\r\ndone for the changes made in this pull request.\r\n\r\n# Additional Information\r\n\r\n- [ ] Breaking Changes\r\n- [ ] Deprecations\r\n- [ ] Large PR broken into smaller PRs, and PR plan linked in the\r\ndescription.\r\n\r\nFor all the boxes checked, include additional details of the changes\r\nmade in this pull request.","shortMessageHtmlLink":"Fix variable names to eliminate checkstyle warnings (#105)"}},{"before":"ca05f77e598d8522592a29632af32c62d065d348","after":null,"ref":"refs/heads/dlo","pushedAt":"2024-05-15T21:15:21.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"teamurko","name":"Stas Pak","path":"/teamurko","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/270580?s=80&v=4"}},{"before":null,"after":"ca05f77e598d8522592a29632af32c62d065d348","ref":"refs/heads/dlo","pushedAt":"2024-05-15T21:15:02.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"teamurko","name":"Stas Pak","path":"/teamurko","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/270580?s=80&v=4"},"commit":{"message":"Refactor: Remove FsStorageProvider from Table Service (#104)\n\nRefactor: Remove FsStorageProvider from Table Service (#104)","shortMessageHtmlLink":"Refactor: Remove FsStorageProvider from Table Service (#104)"}},{"before":"81c08873b170b7fc9709fd4ee2a69f57e680ccde","after":"ca05f77e598d8522592a29632af32c62d065d348","ref":"refs/heads/main","pushedAt":"2024-05-15T17:10:40.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"HotSushi","name":"Sushant Raikar","path":"/HotSushi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6441597?s=80&v=4"},"commit":{"message":"Refactor: Remove FsStorageProvider from Table Service (#104)\n\nRefactor: Remove FsStorageProvider from Table Service (#104)","shortMessageHtmlLink":"Refactor: Remove FsStorageProvider from Table Service (#104)"}},{"before":"f9b2417ad9cf941df0ea5c546277eb7ef137a157","after":"81c08873b170b7fc9709fd4ee2a69f57e680ccde","ref":"refs/heads/main","pushedAt":"2024-05-13T20:22:25.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"HotSushi","name":"Sushant Raikar","path":"/HotSushi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6441597?s=80&v=4"},"commit":{"message":"Refactor: Remove @LegacyFileIO and Make DelegationRefreshToken use new cluster.yaml (#100)\n\nRefactor: Remove @LegacyFileIO and Make DelegationRefreshToken use new cluster.yaml (#100)","shortMessageHtmlLink":"Refactor: Remove @LegacyFileIO and Make DelegationRefreshToken use ne…"}},{"before":"bc5287cfa86cea52c2215fbf9259b11992f18a42","after":"f9b2417ad9cf941df0ea5c546277eb7ef137a157","ref":"refs/heads/main","pushedAt":"2024-05-09T18:44:23.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"HotSushi","name":"Sushant Raikar","path":"/HotSushi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6441597?s=80&v=4"},"commit":{"message":"Fix typo in SETUP.md (#101)\n\n## Summary\r\n\r\n\r\n\r\n[Issue](https://github.com/linkedin/openhouse/issues/#nnn)] Briefly\r\ndiscuss the summary of the changes made in this\r\npull request in 2-3 lines.\r\n\r\n## Changes\r\n\r\n- [ ] Client-facing API Changes\r\n- [ ] Internal API Changes\r\n- [ ] Bug Fixes\r\n- [ ] New Features\r\n- [ ] Performance Improvements\r\n- [ ] Code Style\r\n- [ ] Refactoring\r\n- [x] Documentation\r\n- [ ] Tests\r\n\r\nFor all the boxes checked, please include additional details of the\r\nchanges made in this pull request.\r\n\r\n## Testing Done\r\n\r\n\r\n- [ ] Manually Tested on local docker setup. Please include commands\r\nran, and their output.\r\n- [ ] Added new tests for the changes made.\r\n- [ ] Updated existing tests to reflect the changes made.\r\n- [ ] No tests added or updated. Please explain why. If unsure, please\r\nfeel free to ask for help.\r\n- [ ] Some other form of testing like staging or soak time in\r\nproduction. Please explain.\r\n\r\nFor all the boxes checked, include a detailed description of the testing\r\ndone for the changes made in this pull request.\r\n\r\n# Additional Information\r\n\r\n- [ ] Breaking Changes\r\n- [ ] Deprecations\r\n- [ ] Large PR broken into smaller PRs, and PR plan linked in the\r\ndescription.\r\n\r\nFor all the boxes checked, include additional details of the changes\r\nmade in this pull request.","shortMessageHtmlLink":"Fix typo in SETUP.md (#101)"}},{"before":"d8d136b6d286d6c13005bab5c9fcbcdc6ed95300","after":"bc5287cfa86cea52c2215fbf9259b11992f18a42","ref":"refs/heads/main","pushedAt":"2024-05-08T17:51:06.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"maluchari","name":"Malini Mahalakshmi Venkatachari","path":"/maluchari","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1776785?s=80&v=4"},"commit":{"message":"Fix NPE during empty table stats collection (#99)\n\n## Summary\r\n\r\nThis PR brings in the following fixed for a couple of issues observed.\r\n1. Fix an incorrect check for right timestamps of table creation and\r\nlast modified.\r\n2. Handle a case where a table does not have any data files.\r\n\r\nAlso added unit tests to address the above cases.\r\n\r\n## Changes\r\n\r\n- [ ] Client-facing API Changes\r\n- [ ] Internal API Changes\r\n- [X] Bug Fixes\r\n- [ ] New Features\r\n- [ ] Performance Improvements\r\n- [ ] Code Style\r\n- [ ] Refactoring\r\n- [ ] Documentation\r\n- [ ] Tests\r\n\r\n\r\n## Testing Done\r\n\r\n\r\n- [X] Manually Tested on local docker setup. Please include commands\r\nran, and their output.\r\n- [X] Added new tests for the changes made.\r\n- [ ] Updated existing tests to reflect the changes made.\r\n- [ ] No tests added or updated. Please explain why. If unsure, please\r\nfeel free to ask for help.\r\n- [ ] Some other form of testing like staging or soak time in\r\nproduction. Please explain.\r\n\r\nDocker testing:\r\n\r\nCreated an empty table. emptytbl\r\n\r\n`openhouse@6c67d19838cc:/opt/spark$ hdfs dfs -ls\r\n/data/openhouse/db/emptytbl-e0ff80fe-6eeb-4df9-9d04-9437444b0761/\r\nFound 1 items\r\n-rw-r--r-- 3 openhouse supergroup 2002 2024-05-08 06:46\r\n/data/openhouse/db/emptytbl-e0ff80fe-6eeb-4df9-9d04-9437444b0761/00000-69745f53-4772-44db-a180-919652f4f6da.metadata.json\r\n`\r\n\r\n```\r\nscala> spark.sql(\"show tblproperties openhouse.db.emptytbl\").show(false)\r\n+------------------------------------------+-------------------------------------------------------------------------------------------------------------------------+\r\n|key |value |\r\n+------------------------------------------+-------------------------------------------------------------------------------------------------------------------------+\r\n |\r\n|openhouse.creationTime |1715150780639 |\r\n |\r\n|openhouse.lastModifiedTime |1715150780639 |\r\n |\r\n|openhouse.tableId |emptytbl |\r\n|openhouse.tableVersion |INITIAL_VERSION |\r\n |\r\n+-------------------------\r\n```\r\n\r\nRight **tableCreationTimestamp** populated and also no NPE due to\r\nabsence of datafiles:\r\n\r\n`INFO spark.TableStatsCollectionSparkApp: Publishing stats for table:\r\ndb.emptytbl\",\"2024-05-08 07:56:04,464 INFO\r\nspark.TableStatsCollectionSparkApp:\r\n{\\\"totalReferencedDataFilesSizeInBytes\\\":0,\\\"numReferencedDataFiles\\\":0,\\\"totalDirectorySizeInBytes\\\":2002,\\\"numObjectsInDirectory\\\":1,\\\"numCurrentSnapshotReferencedDataFiles\\\":0,\\\"totalCurrentSnapshotReferencedDataFilesSizeInBytes\\\":0,\\\"numExistingMetadataJsonFiles\\\":1,\\\"numReferencedManifestFiles\\\":0,\\\"numReferencedManifestLists\\\":0,\\\"recordTimestamp\\\":1715154955889,\\\"clusterName\\\":\\\"LocalHadoopCluster\\\",\\\"databaseName\\\":\\\"db\\\",\\\"tableName\\\":\\\"emptytbl\\\",\\\"tableUUID\\\":\\\"e0ff80fe-6eeb-4df9-9d04-9437444b0761\\\",\\\"tableLocation\\\":\\\"/data/openhouse/db/emptytbl-e0ff80fe-6eeb-4df9-9d04-9437444b0761\\\",\\\"tableCreator\\\":\\\"openhouse\\\",\\\"tableCreationTimestamp\\\":1715150780639,\\\"tableLastUpdatedTimestamp\\\":1715150780639,\\\"tableType\\\":\\\"PRIMARY_TABLE\\\"}\",\"2024-05-08\r\n07:56:04,465`\r\n\r\n\r\n\r\nFor all the boxes checked, include a detailed description of the testing\r\ndone for the changes made in this pull request.\r\n\r\n# Additional Information\r\n\r\n- [ ] Breaking Changes\r\n- [ ] Deprecations\r\n- [ ] Large PR broken into smaller PRs, and PR plan linked in the\r\ndescription.\r\n\r\nFor all the boxes checked, include additional details of the changes\r\nmade in this pull request.","shortMessageHtmlLink":"Fix NPE during empty table stats collection (#99)"}},{"before":null,"after":"c2f93e394144fde97a0455485dd82eeec14e12ab","ref":"refs/heads/spark_catalog_proto","pushedAt":"2024-05-08T06:32:09.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"autumnust","name":"Lei","path":"/autumnust","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5044213?s=80&v=4"},"commit":{"message":"Prototype sparkCatalog overwrite","shortMessageHtmlLink":"Prototype sparkCatalog overwrite"}},{"before":"9339553558d6d2e128f2a0b79c410f7d1c7710c9","after":"d8d136b6d286d6c13005bab5c9fcbcdc6ed95300","ref":"refs/heads/main","pushedAt":"2024-05-07T00:10:36.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"HotSushi","name":"Sushant Raikar","path":"/HotSushi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6441597?s=80&v=4"},"commit":{"message":"Add storageType field to HouseTable Entity (#81)\n\n## Summary\r\n\r\n\r\n\r\n[Issue](https://github.com/linkedin/openhouse/issues/#80) Added a\r\nstorageType field to the HouseTable Entity to support multiple storage\r\ntypes.\r\n\r\nResolves #80 \r\n\r\n## Changes\r\n\r\n- [ ] Client-facing API Changes\r\n- [ ] Internal API Changes\r\n- [ ] Bug Fixes\r\n- [x] New Features\r\n- [ ] Performance Improvements\r\n- [ ] Code Style\r\n- [ ] Refactoring\r\n- [ ] Documentation\r\n- [x] Tests\r\n\r\nFor all the boxes checked, please include additional details of the\r\nchanges made in this pull request.\r\n\r\nAdded storageType field to the HouseTable DTO.\r\n\r\n## Testing Done\r\n\r\n\r\n- [ ] Manually Tested on local docker setup. Please include commands\r\nran, and their output.\r\n- [ ] Added new tests for the changes made.\r\n- [x] Updated existing tests to reflect the changes made.\r\n- [ ] No tests added or updated. Please explain why. If unsure, please\r\nfeel free to ask for help.\r\n- [ ] Some other form of testing like staging or soak time in\r\nproduction. Please explain.\r\n\r\nFor all the boxes checked, include a detailed description of the testing\r\ndone for the changes made in this pull request.\r\n\r\n# Additional Information\r\n\r\n- [ ] Breaking Changes\r\n- [ ] Deprecations\r\n- [ ] Large PR broken into smaller PRs, and PR plan linked in the\r\ndescription.\r\n\r\nFor all the boxes checked, include additional details of the changes\r\nmade in this pull request.\r\n\r\n---------\r\n\r\nCo-authored-by: Sushant Raikar ","shortMessageHtmlLink":"Add storageType field to HouseTable Entity (#81)"}},{"before":"d824024837fecae7a18b13c24fba8bf122a47796","after":"9339553558d6d2e128f2a0b79c410f7d1c7710c9","ref":"refs/heads/main","pushedAt":"2024-05-06T19:11:46.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"HotSushi","name":"Sushant Raikar","path":"/HotSushi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6441597?s=80&v=4"},"commit":{"message":"Introduce FileIOManager and FileIO implementations for HDFS and Local Storage (#96)\n\nLaying foundations for storage part 4: `FileIOManager` and FileIO\r\nimplementations for `HDFS` and `Local`","shortMessageHtmlLink":"Introduce FileIOManager and FileIO implementations for HDFS and Local…"}},{"before":"041cdd36ce703fd1c8bf1cedd0efb06152f1567b","after":"d824024837fecae7a18b13c24fba8bf122a47796","ref":"refs/heads/main","pushedAt":"2024-05-01T23:02:34.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"autumnust","name":"Lei","path":"/autumnust","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5044213?s=80&v=4"},"commit":{"message":"Ensure putSnapshot path honoring case insensitive contract (#85)\n\n## Summary\r\n\r\nThis is a bug fixes for put-snapshot code path, where UUID-extraction\r\nprocess is using `tableId` and `databaseId` from the request itself.\r\nThose ids, if directly obtained from top-level request body, can lost\r\ncasing if the requests were from platform like Spark SQL. Since the\r\nunderlying storage (e.g. HDFS ) is case sensitive in its path URL, we\r\nwill need to ensure the original casing info when issue the first commit\r\nas part of CTAS is preserved in the process of UUID-extraction of the\r\nsecond commit.\r\n\r\nThe other parts in this PR is to ensure, when `put` is happening, the\r\n`tableDto` is not always built from scratch when there's existing object\r\ndiscovered by `findById` method previously. This is done by switch\r\n`orElse` method to `orElseGet`, in which the latter will only call the\r\nsupplier lazily when the calling object is absent. This leads to a\r\nwasteful implementation as well as confusion on the code stack.\r\n\r\nThis PR also include:\r\n- Some refactoring to share code with existing code. \r\n- Ensure `text-fixtures` module has its repository horning case\r\ninsensitive contract so that this behavior can be tested through\r\nembedded instances.\r\n- Testing the casing contract in both SQL and Catalog API.","shortMessageHtmlLink":"Ensure putSnapshot path honoring case insensitive contract (#85)"}},{"before":"fe1f03eb18c6590cf5ea9a083499c18c939b9ee7","after":"041cdd36ce703fd1c8bf1cedd0efb06152f1567b","ref":"refs/heads/main","pushedAt":"2024-05-01T16:25:35.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jainlavina","name":"Lavina Jain","path":"/jainlavina","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/114708561?s=80&v=4"},"commit":{"message":"Use fileIO to delete files in dropTable (#94)\n\n## Summary\r\n\r\nUse fileIO instead of filesystem to delete data and metadata files in\r\ndropTable.\r\nIt uses deletePrefix() and expects an instance of FileIO that supports\r\nprefix operations. If table operations are instantiated with a fileIO\r\ninstance that does not extend from SupportPrefixOperations then it will\r\nthrow an exception. But, that is ok because all popular FileIO\r\nimplementations like HadoopFileIO and major cloud providers support the\r\nprefix operations.\r\n\r\nThis change allows us to completely eliminate dependency on hadoop\r\nfilesystem for the catalog.\r\n\r\n## Changes\r\n\r\n- [ ] Client-facing API Changes\r\n- [ ] Internal API Changes\r\n- [x] Bug Fixes\r\n- [ ] New Features\r\n- [ ] Performance Improvements\r\n- [ ] Code Style\r\n- [ ] Refactoring\r\n- [ ] Documentation\r\n- [ ] Tests\r\n\r\nFor all the boxes checked, please include additional details of the\r\nchanges made in this pull request.\r\n\r\n## Testing Done\r\nTested using docker and existing e2e tests that cover dropTable.\r\nAlso, added logs and validated by manually inspecting logs in docker.\r\n\r\n- [x] Manually Tested on local docker setup. Please include commands\r\nran, and their output.\r\n- [ ] Added new tests for the changes made.\r\n- [ ] Updated existing tests to reflect the changes made.\r\n- [ ] No tests added or updated. Please explain why. If unsure, please\r\nfeel free to ask for help.\r\n- [ ] Some other form of testing like staging or soak time in\r\nproduction. Please explain.\r\n\r\nTested by inspecting namenode in docker.\r\n1. Created table lj_test_tbl in db \"ljdb\":\r\n$ curl \"${curlArgs[@]}\" -XPOST\r\nhttp://localhost:8000/v1/databases/ljdb/tables/ --data-raw '{\r\n \"tableId\": \"lj_test_tbl\",\r\n \"databaseId\": \"ljdb\",\r\n \"baseTableVersion\": \"INITIAL_VERSION\",\r\n \"clusterId\": \"LocalHadoopCluster\",\r\n .......\r\n\r\n2. Verified that the folder and files exist in hdfs:\r\n\"Screenshot\r\n\r\n3. Dropped table.\r\n$ curl \"${curlArgs[@]}\" -XDELETE\r\nhttp://localhost:8000/v1/databases/ljdb/tables/lj_test_tbl\r\n\r\n4. Verified on namenode in docker that the directory got deleted in\r\nhdfs:\r\n\"Screenshot\r\n\r\nDocker test output.\r\n1. Create table:\r\n$ curl \"${curlArgs[@]}\" -XPOST\r\nhttp://localhost:8000/v1/databases/d3/tables/ --data-raw '{\r\n \"tableId\": \"t1\",\r\n \"databaseId\": \"d3\",\r\n \"baseTableVersion\": \"INITIAL_VERSION\",\r\n \"clusterId\": \"LocalHadoopCluster\",\r\n\"schema\": \"{\\\"type\\\": \\\"struct\\\", \\\"fields\\\": [{\\\"id\\\": 1,\\\"required\\\":\r\ntrue,\\\"name\\\": \\\"id\\\",\\\"type\\\": \\\"string\\\"},{\\\"id\\\": 2,\\\"required\\\":\r\ntrue,\\\"name\\\": \\\"name\\\",\\\"type\\\": \\\"string\\\"},{\\\"id\\\": 3,\\\"required\\\":\r\ntrue,\\\"name\\\": \\\"ts\\\",\\\"type\\\": \\\"timestamp\\\"}]}\",\r\n \"timePartitioning\": {\r\n \"columnName\": \"ts\",\r\n \"granularity\": \"HOUR\"\r\n },\r\n \"clustering\": [\r\n {\r\n \"columnName\": \"name\"\r\n }\r\n ],\r\n \"tableProperties\": {\r\n \"key\": \"value\"\r\n }\r\n}' | json_pp\r\n% Total % Received % Xferd Average Speed Time Time Time Current\r\nDload Upload Total Spent Left Speed\r\n100 2152 0 1578 100 574 1218 443 0:00:01 0:00:01 --:--:-- 1663\r\n{\r\n \"clusterId\" : \"LocalHadoopCluster\",\r\n \"clustering\" : [\r\n {\r\n \"columnName\" : \"name\",\r\n \"transform\" : null\r\n }\r\n ],\r\n \"creationTime\" : 1714428252917,\r\n \"databaseId\" : \"d3\",\r\n \"lastModifiedTime\" : 1714428252917,\r\n \"policies\" : null,\r\n\"schema\" :\r\n\"{\\\"type\\\":\\\"struct\\\",\\\"schema-id\\\":0,\\\"fields\\\":[{\\\"id\\\":1,\\\"name\\\":\\\"id\\\",\\\"required\\\":true,\\\"type\\\":\\\"string\\\"},{\\\"id\\\":2,\\\"name\\\":\\\"name\\\",\\\"required\\\":true,\\\"type\\\":\\\"string\\\"},{\\\"id\\\":3,\\\"name\\\":\\\"ts\\\",\\\"required\\\":true,\\\"type\\\":\\\"timestamp\\\"}]}\",\r\n \"tableCreator\" : \"openhouse\",\r\n \"tableId\" : \"t1\",\r\n\"tableLocation\" :\r\n\"hdfs://namenode:9000/data/openhouse/d3/t1-eb5975fd-f68d-44d7-9fa4-9b5a4b98a7b3/00000-9076ff1b-5823-449f-b31c-d0d653f3e18f.metadata.json\",\r\n \"tableProperties\" : {\r\n \"key\" : \"value\",\r\n \"openhouse.clusterId\" : \"LocalHadoopCluster\",\r\n \"openhouse.creationTime\" : \"1714428252917\",\r\n \"openhouse.databaseId\" : \"d3\",\r\n \"openhouse.lastModifiedTime\" : \"1714428252917\",\r\n \"openhouse.tableCreator\" : \"openhouse\",\r\n \"openhouse.tableId\" : \"t1\",\r\n\"openhouse.tableLocation\" :\r\n\"/data/openhouse/d3/t1-eb5975fd-f68d-44d7-9fa4-9b5a4b98a7b3/00000-9076ff1b-5823-449f-b31c-d0d653f3e18f.metadata.json\",\r\n \"openhouse.tableType\" : \"PRIMARY_TABLE\",\r\n \"openhouse.tableUUID\" : \"eb5975fd-f68d-44d7-9fa4-9b5a4b98a7b3\",\r\n \"openhouse.tableUri\" : \"LocalHadoopCluster.d3.t1\",\r\n \"openhouse.tableVersion\" : \"INITIAL_VERSION\",\r\n \"policies\" : \"\",\r\n \"write.format.default\" : \"orc\",\r\n \"write.metadata.delete-after-commit.enabled\" : \"true\",\r\n \"write.metadata.previous-versions-max\" : \"28\"\r\n },\r\n \"tableType\" : \"PRIMARY_TABLE\",\r\n \"tableUUID\" : \"eb5975fd-f68d-44d7-9fa4-9b5a4b98a7b3\",\r\n \"tableUri\" : \"LocalHadoopCluster.d3.t1\",\r\n \"tableVersion\" : \"INITIAL_VERSION\",\r\n \"timePartitioning\" : {\r\n \"columnName\" : \"ts\",\r\n \"granularity\" : \"HOUR\"\r\n }\r\n}\r\n\r\n5. Get table:\r\n$ curl \"${curlArgs[@]}\" -XGET\r\nhttp://localhost:8000/v1/databases/d3/tables/t1 | json_pp\r\n% Total % Received % Xferd Average Speed Time Time Time Current\r\nDload Upload Total Spent Left Speed\r\n100 1578 0 1578 0 0 8131 0 --:--:-- --:--:-- --:--:-- 8092\r\n{\r\n \"clusterId\" : \"LocalHadoopCluster\",\r\n \"clustering\" : [\r\n {\r\n \"columnName\" : \"name\",\r\n \"transform\" : null\r\n }\r\n ],\r\n \"creationTime\" : 1714428252917,\r\n \"databaseId\" : \"d3\",\r\n \"lastModifiedTime\" : 1714428252917,\r\n \"policies\" : null,\r\n\"schema\" :\r\n\"{\\\"type\\\":\\\"struct\\\",\\\"schema-id\\\":0,\\\"fields\\\":[{\\\"id\\\":1,\\\"name\\\":\\\"id\\\",\\\"required\\\":true,\\\"type\\\":\\\"string\\\"},{\\\"id\\\":2,\\\"name\\\":\\\"name\\\",\\\"required\\\":true,\\\"type\\\":\\\"string\\\"},{\\\"id\\\":3,\\\"name\\\":\\\"ts\\\",\\\"required\\\":true,\\\"type\\\":\\\"timestamp\\\"}]}\",\r\n \"tableCreator\" : \"openhouse\",\r\n \"tableId\" : \"t1\",\r\n\"tableLocation\" :\r\n\"hdfs://namenode:9000/data/openhouse/d3/t1-eb5975fd-f68d-44d7-9fa4-9b5a4b98a7b3/00000-9076ff1b-5823-449f-b31c-d0d653f3e18f.metadata.json\",\r\n \"tableProperties\" : {\r\n \"key\" : \"value\",\r\n \"openhouse.clusterId\" : \"LocalHadoopCluster\",\r\n \"openhouse.creationTime\" : \"1714428252917\",\r\n \"openhouse.databaseId\" : \"d3\",\r\n \"openhouse.lastModifiedTime\" : \"1714428252917\",\r\n \"openhouse.tableCreator\" : \"openhouse\",\r\n \"openhouse.tableId\" : \"t1\",\r\n\"openhouse.tableLocation\" :\r\n\"/data/openhouse/d3/t1-eb5975fd-f68d-44d7-9fa4-9b5a4b98a7b3/00000-9076ff1b-5823-449f-b31c-d0d653f3e18f.metadata.json\",\r\n \"openhouse.tableType\" : \"PRIMARY_TABLE\",\r\n \"openhouse.tableUUID\" : \"eb5975fd-f68d-44d7-9fa4-9b5a4b98a7b3\",\r\n \"openhouse.tableUri\" : \"LocalHadoopCluster.d3.t1\",\r\n \"openhouse.tableVersion\" : \"INITIAL_VERSION\",\r\n \"policies\" : \"\",\r\n \"write.format.default\" : \"orc\",\r\n \"write.metadata.delete-after-commit.enabled\" : \"true\",\r\n \"write.metadata.previous-versions-max\" : \"28\"\r\n },\r\n \"tableType\" : \"PRIMARY_TABLE\",\r\n \"tableUUID\" : \"eb5975fd-f68d-44d7-9fa4-9b5a4b98a7b3\",\r\n \"tableUri\" : \"LocalHadoopCluster.d3.t1\",\r\n \"tableVersion\" : \"INITIAL_VERSION\",\r\n \"timePartitioning\" : {\r\n \"columnName\" : \"ts\",\r\n \"granularity\" : \"HOUR\"\r\n }\r\n}\r\n\r\n6. List tables in a db:\r\n$ curl \"${curlArgs[@]}\" -XGET\r\nhttp://localhost:8000/v1/databases/d3/tables/ | json_pp\r\n% Total % Received % Xferd Average Speed Time Time Time Current\r\nDload Upload Total Spent Left Speed\r\n100 1592 0 1592 0 0 25026 0 --:--:-- --:--:-- --:--:-- 25269\r\n{\r\n \"results\" : [\r\n {\r\n \"clusterId\" : \"LocalHadoopCluster\",\r\n \"clustering\" : [\r\n {\r\n \"columnName\" : \"name\",\r\n \"transform\" : null\r\n }\r\n ],\r\n \"creationTime\" : 1714428252917,\r\n \"databaseId\" : \"d3\",\r\n \"lastModifiedTime\" : 1714428252917,\r\n \"policies\" : null,\r\n\"schema\" :\r\n\"{\\\"type\\\":\\\"struct\\\",\\\"schema-id\\\":0,\\\"fields\\\":[{\\\"id\\\":1,\\\"name\\\":\\\"id\\\",\\\"required\\\":true,\\\"type\\\":\\\"string\\\"},{\\\"id\\\":2,\\\"name\\\":\\\"name\\\",\\\"required\\\":true,\\\"type\\\":\\\"string\\\"},{\\\"id\\\":3,\\\"name\\\":\\\"ts\\\",\\\"required\\\":true,\\\"type\\\":\\\"timestamp\\\"}]}\",\r\n \"tableCreator\" : \"openhouse\",\r\n \"tableId\" : \"t1\",\r\n\"tableLocation\" :\r\n\"hdfs://namenode:9000/data/openhouse/d3/t1-eb5975fd-f68d-44d7-9fa4-9b5a4b98a7b3/00000-9076ff1b-5823-449f-b31c-d0d653f3e18f.metadata.json\",\r\n \"tableProperties\" : {\r\n \"key\" : \"value\",\r\n \"openhouse.clusterId\" : \"LocalHadoopCluster\",\r\n \"openhouse.creationTime\" : \"1714428252917\",\r\n \"openhouse.databaseId\" : \"d3\",\r\n \"openhouse.lastModifiedTime\" : \"1714428252917\",\r\n \"openhouse.tableCreator\" : \"openhouse\",\r\n \"openhouse.tableId\" : \"t1\",\r\n\"openhouse.tableLocation\" :\r\n\"/data/openhouse/d3/t1-eb5975fd-f68d-44d7-9fa4-9b5a4b98a7b3/00000-9076ff1b-5823-449f-b31c-d0d653f3e18f.metadata.json\",\r\n \"openhouse.tableType\" : \"PRIMARY_TABLE\",\r\n\"openhouse.tableUUID\" : \"eb5975fd-f68d-44d7-9fa4-9b5a4b98a7b3\",\r\n \"openhouse.tableUri\" : \"LocalHadoopCluster.d3.t1\",\r\n \"openhouse.tableVersion\" : \"INITIAL_VERSION\",\r\n \"policies\" : \"\",\r\n \"write.format.default\" : \"orc\",\r\n \"write.metadata.delete-after-commit.enabled\" : \"true\",\r\n \"write.metadata.previous-versions-max\" : \"28\"\r\n },\r\n \"tableType\" : \"PRIMARY_TABLE\",\r\n \"tableUUID\" : \"eb5975fd-f68d-44d7-9fa4-9b5a4b98a7b3\",\r\n \"tableUri\" : \"LocalHadoopCluster.d3.t1\",\r\n \"tableVersion\" : \"INITIAL_VERSION\",\r\n \"timePartitioning\" : {\r\n \"columnName\" : \"ts\",\r\n \"granularity\" : \"HOUR\"\r\n }\r\n }\r\n ]\r\n}\r\n\r\n7. Drop table:\r\n$ curl \"${curlArgs[@]}\" -XDELETE\r\nhttp://localhost:8000/v1/databases/d3/tables/t1\r\n\r\n\r\n8. List tables in a db again:\r\n$ curl \"${curlArgs[@]}\" -XGET\r\nhttp://localhost:8000/v1/databases/d3/tables/ | json_pp\r\n% Total % Received % Xferd Average Speed Time Time Time Current\r\nDload Upload Total Spent Left Speed\r\n100 14 0 14 0 0 162 0 --:--:-- --:--:-- --:--:-- 160\r\n{\r\n \"results\" : []\r\n}\r\n\r\n# Additional Information\r\n\r\n- [ ] Breaking Changes\r\n- [ ] Deprecations\r\n- [ ] Large PR broken into smaller PRs, and PR plan linked in the\r\ndescription.\r\n\r\nFor all the boxes checked, include additional details of the changes\r\nmade in this pull request.","shortMessageHtmlLink":"Use fileIO to delete files in dropTable (#94)"}},{"before":"e002146b9a073c6581b2d1ce11d7d039e85eeaf3","after":"fe1f03eb18c6590cf5ea9a083499c18c939b9ee7","ref":"refs/heads/main","pushedAt":"2024-05-01T16:19:42.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"y242yang","name":"Ann Yiming Yang","path":"/y242yang","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/35815874?s=80&v=4"},"commit":{"message":"Pass the right ODD implementation class in jobs.yaml (#65)\n\n## Summary\r\n\r\nImplementation class name is wrong in jobs.yaml. Added new job type in\r\ndocumentation\r\n\r\n## Changes\r\n\r\n- [ ] Client-facing API Changes\r\n- [ ] Internal API Changes\r\n- [x] Bug Fixes\r\n- [ ] New Features\r\n- [ ] Performance Improvements\r\n- [ ] Code Style\r\n- [ ] Refactoring\r\n- [ ] Documentation\r\n- [ ] Tests\r\n\r\nFor all the boxes checked, please include additional details of the\r\nchanges made in this pull request.\r\n\r\n## Testing Done\r\n\r\n\r\n- [ ] Manually Tested on local docker setup. Please include commands\r\nran, and their output.\r\n- [ ] Added new tests for the changes made.\r\n- [ ] Updated existing tests to reflect the changes made.\r\n- [x] No tests added or updated. Please explain why. If unsure, please\r\nfeel free to ask for help.\r\n- [ ] Some other form of testing like staging or soak time in\r\nproduction. Please explain.\r\n\r\nNo class is named the old name. \r\n\r\n# Additional Information\r\n\r\n- [ ] Breaking Changes\r\n- [ ] Deprecations\r\n- [ ] Large PR broken into smaller PRs, and PR plan linked in the\r\ndescription.\r\n\r\nFor all the boxes checked, include additional details of the changes\r\nmade in this pull request.","shortMessageHtmlLink":"Pass the right ODD implementation class in jobs.yaml (#65)"}},{"before":"283d150c46635872d1bdbc36fc7f51df04eaeb26","after":"e002146b9a073c6581b2d1ce11d7d039e85eeaf3","ref":"refs/heads/main","pushedAt":"2024-04-29T18:52:25.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"teamurko","name":"Stas Pak","path":"/teamurko","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/270580?s=80&v=4"},"commit":{"message":"Log task counts per status in JobScheduler (#93)\n\n## Summary\r\n\r\nCurrently only success, cancel count and total count are logged. This\r\nincomplete information can mislead to think that success rate is slow.\r\nWe need to also show skipped count since many tasks will be skipped for\r\nsome tables, e.g. missing retention policy, replica, etc.\r\n\r\n## Changes\r\n\r\n- [ ] Client-facing API Changes\r\n- [ ] Internal API Changes\r\n- [ ] Bug Fixes\r\n- [ ] New Features\r\n- [ ] Performance Improvements\r\n- [x] Code Style\r\n- [x] Refactoring\r\n- [ ] Documentation\r\n- [ ] Tests\r\n\r\nFor all the boxes checked, please include additional details of the\r\nchanges made in this pull request.\r\n\r\n## Testing Done\r\n\r\n\r\n- [x] Manually Tested on local docker setup. Please include commands\r\nran, and their output.\r\n- [x] No tests added or updated. Please explain why. If unsure, please\r\nfeel free to ask for help.\r\n\r\nFor all the boxes checked, include a detailed description of the testing\r\ndone for the changes made in this pull request.\r\n\r\n# Additional Information\r\n\r\n- [ ] Breaking Changes\r\n- [ ] Deprecations\r\n- [ ] Large PR broken into smaller PRs, and PR plan linked in the\r\ndescription.\r\n\r\nFor all the boxes checked, include additional details of the changes\r\nmade in this pull request.","shortMessageHtmlLink":"Log task counts per status in JobScheduler (#93)"}},{"before":"52abbc539b1fef53d3b873655cb1cb70e7bdb6e5","after":"283d150c46635872d1bdbc36fc7f51df04eaeb26","ref":"refs/heads/main","pushedAt":"2024-04-29T17:42:51.000Z","pushType":"pr_merge","commitsCount":3,"pusher":{"login":"HotSushi","name":"Sushant Raikar","path":"/HotSushi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6441597?s=80&v=4"},"commit":{"message":"Merge pull request #90 from HotSushi/storage\n\nIntroduce StorageManager and an Hdfs Implementation for Storage","shortMessageHtmlLink":"Merge pull request #90 from HotSushi/storage"}},{"before":"8a7cc973122c7eac62cff6f980c34292343a2f6a","after":null,"ref":"refs/heads/snapshot_casing","pushedAt":"2024-04-27T00:01:55.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"autumnust","name":"Lei","path":"/autumnust","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5044213?s=80&v=4"}},{"before":null,"after":"8a7cc973122c7eac62cff6f980c34292343a2f6a","ref":"refs/heads/snapshot_casing","pushedAt":"2024-04-26T23:59:12.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"autumnust","name":"Lei","path":"/autumnust","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5044213?s=80&v=4"},"commit":{"message":"Ensure table-service and put-snapshot is lasizy evaluating tableDto only when absent","shortMessageHtmlLink":"Ensure table-service and put-snapshot is lasizy evaluating tableDto o…"}},{"before":"8ec34297e48993544841e0959469380630715e7c","after":"52abbc539b1fef53d3b873655cb1cb70e7bdb6e5","ref":"refs/heads/main","pushedAt":"2024-04-26T21:01:40.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"maluchari","name":"Malini Mahalakshmi Venkatachari","path":"/maluchari","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1776785?s=80&v=4"},"commit":{"message":"Return CommitStateUnknownException in case of an internal server error to prevent commit abort leading to data loss. (#63)\n\n* Fix error handling to prevent commit abort from causing data loss\r\n\r\n* Update to capture responseBody as part of CommitStateUnknownException\r\n\r\n* Empty commit\r\n\r\n* Fix typo","shortMessageHtmlLink":"Return CommitStateUnknownException in case of an internal server erro…"}},{"before":"54791f32c747e0c414aec413bbfa4c2daa78d988","after":"8ec34297e48993544841e0959469380630715e7c","ref":"refs/heads/main","pushedAt":"2024-04-26T20:13:17.000Z","pushType":"pr_merge","commitsCount":3,"pusher":{"login":"jiang95-dev","name":"Levi Jiang","path":"/jiang95-dev","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/19853208?s=80&v=4"},"commit":{"message":"Merge pull request #89 from jiang95-dev/hts-concurrency\n\nHTS concurrent put and delete resolution -- Reopen and Refractor tests","shortMessageHtmlLink":"Merge pull request #89 from jiang95-dev/hts-concurrency"}},{"before":"b3e6f39a34d8db00c490d111ff85477018324053","after":"54791f32c747e0c414aec413bbfa4c2daa78d988","ref":"refs/heads/main","pushedAt":"2024-04-26T01:03:49.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"sumedhsakdeo","name":"Sumedh Sakdeo","path":"/sumedhsakdeo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/773250?s=80&v=4"},"commit":{"message":"Merge pull request #92 from HotSushi/classifier_uber_2\n\nFix build issues after classifier change","shortMessageHtmlLink":"Merge pull request #92 from HotSushi/classifier_uber_2"}},{"before":"8d6dc6aa418c1310bc5448f5036e46a12440e303","after":"b3e6f39a34d8db00c490d111ff85477018324053","ref":"refs/heads/main","pushedAt":"2024-04-25T21:26:34.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"HotSushi","name":"Sushant Raikar","path":"/HotSushi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6441597?s=80&v=4"},"commit":{"message":"Merge pull request #91 from HotSushi/classifier_uber\n\nChange classifier from 'all' -> 'uber'","shortMessageHtmlLink":"Merge pull request #91 from HotSushi/classifier_uber"}},{"before":"879c4290aeb24e91fc39e6552e8bb1c0811abd4d","after":"8d6dc6aa418c1310bc5448f5036e46a12440e303","ref":"refs/heads/main","pushedAt":"2024-04-25T20:32:17.000Z","pushType":"pr_merge","commitsCount":5,"pusher":{"login":"rohitkum2506","name":"Rohit kumar","path":"/rohitkum2506","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1126602?s=80&v=4"},"commit":{"message":"Merge pull request #83 from rohitkum2506/rohitkumar2506/RetentionLogicFix\n\nMade Retention Operation Lightweight by removing/fixing non-metadata ops","shortMessageHtmlLink":"Merge pull request #83 from rohitkum2506/rohitkumar2506/RetentionLogi…"}},{"before":"e27be64e730995eb65262f58889667e8015d2ce6","after":"879c4290aeb24e91fc39e6552e8bb1c0811abd4d","ref":"refs/heads/main","pushedAt":"2024-04-25T16:37:58.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"HotSushi","name":"Sushant Raikar","path":"/HotSushi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6441597?s=80&v=4"},"commit":{"message":"Merge pull request #82 from HotSushi/storage\n\nAdd interfaces for StorageClient, Storage and its Implementation for Local","shortMessageHtmlLink":"Merge pull request #82 from HotSushi/storage"}},{"before":"75b51b0ddc4c76268ef94b595567d58a5a12387f","after":"e27be64e730995eb65262f58889667e8015d2ce6","ref":"refs/heads/main","pushedAt":"2024-04-24T11:33:09.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"sumedhsakdeo","name":"Sumedh Sakdeo","path":"/sumedhsakdeo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/773250?s=80&v=4"},"commit":{"message":"Merge pull request #88 from linkedin/revert-86-gradle_module\n\nRevert \"publish gradle module metadata\"","shortMessageHtmlLink":"Merge pull request #88 from linkedin/revert-86-gradle_module"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEWIDETwA","startCursor":null,"endCursor":null}},"title":"Activity · linkedin/openhouse"}