{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":37908454,"defaultBranch":"master","name":"seastar","ownerLogin":"avikivity","currentUserCanPush":false,"isFork":true,"isEmpty":false,"createdAt":"2015-06-23T09:03:46.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/1017210?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1710953930.0","currentOid":""},"activityList":{"items":[{"before":"ff55fa76507a9a3260919c18655d4a97102f3987","after":"99d2465ee3eb563a1ab2d7c9e9da52b7b03b8970","ref":"refs/heads/madvise-collapse","pushedAt":"2024-05-18T12:11:45.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"smp: allocate hugepages eagerly when kernel support is available\n\nInstead of deferring merging pages into hugepages to the transparent\nhugepage scanner, advise the kernel to do so immediately using the new\nMADV_POPULATE_WRITE and MADV_COLLAPSE advice.\n\nRefactor the prefaulter to attempt first to use MADV_POPULATE_WRITE\nto fault in a whole hugepage's worth of memory. 
This should fault\nthe range as a hugepage but for good measure use MADV_COLLAPSE too\n(which would be a no-op if the work was done in MADV_POPULATE_WRITE).","shortMessageHtmlLink":"smp: allocate hugepages eagerly when kernel support is available"}},{"before":null,"after":"dff152dd3a3dc140b594faf6e58318af43b8abbe","ref":"refs/heads/aligned-new-standard","pushedAt":"2024-03-20T16:58:50.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"memory: drop support for compilers that don't support aligned new\n\nAligned new (__cpp_aligned_new) was introduced in C++17 and compilers\nsupport it since 2018. Remove the conditional compilation and refuse\nto build on compilers that are too old to support it (we require C++20+\nanyway).","shortMessageHtmlLink":"memory: drop support for compilers that don't support aligned new"}},{"before":"52a65cba518a49b94e6a9d356e1f6d66ba544a20","after":"ff55fa76507a9a3260919c18655d4a97102f3987","ref":"refs/heads/madvise-collapse","pushedAt":"2024-03-16T19:29:51.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"smp: advise kernel to collapse transparent hugepages eagerly when prefaulting\n\nInstead of deferring merging pages into hugepages to the transparent\nhugepage scanner, advise the kernel to do so immediately using the new\nMADV_COLLAPSE advice.\n\nSince the advice only takes hold if at least one page is faulted in,\ndo so in the prefaulter. 
To reduce the need to do extra work, do it\nafter exactly one page was prefaulted; this will limit the need to\nmove pages around if they were not prefaulted as a huge page to begin\nwith.\n\nNote that prefaulting usually generates huge pages anyway; this makes\nthe process more reliable in case memory was scrambled.","shortMessageHtmlLink":"smp: advise kernel to collapse transparent hugepages eagerly when pre…"}},{"before":"67b682622eedeff243a936ef446f38b38a732b2b","after":"52a65cba518a49b94e6a9d356e1f6d66ba544a20","ref":"refs/heads/madvise-collapse","pushedAt":"2024-03-16T19:23:21.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"smp: advise kernel to collapse transparent hugepages eagerly when prefaulting\n\nInstead of deferring merging pages into hugepages to the transparent\nhugepage scanner, advise the kernel to do so immediately using the new\nMADV_COLLAPSE advice.\n\nSince the advice only takes hold if at least one page is faulted in,\ndo so in the prefaulter. 
To reduce the need to do extra work, do it\nafter exactly one page was prefaulted; this will limit the need to\nmove pages around if they were not prefaulted as a huge page to begin\nwith.\n\nNote that prefaulting usually generates huge pages anyway; this makes\nthe process more reliable in case memory was scrambled.","shortMessageHtmlLink":"smp: advise kernel to collapse transparent hugepages eagerly when pre…"}},{"before":null,"after":"67b682622eedeff243a936ef446f38b38a732b2b","ref":"refs/heads/madvise-collapse","pushedAt":"2024-03-16T19:19:19.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"smp: advise kernel to collapse transparent hugepages eagerly when prefaulting\n\nInstead of deferring merging pages into hugepages to the transparent\nhugepage scanner, advise the kernel to do so immediately using the new\nMADV_COLLAPSE advice.\n\nSince the advice only takes hold if at least one page is faulted in,\ndo so in the prefaulter. 
To reduce the need to do extra work, do it\nafter exactly one page was prefaulted; this will limit the need to\nmove pages around if they were not prefaulted as a huge page to begin\nwith.\n\nNote that prefaulting usually generates huge pages anyway; this makes\nthe process more reliable in case memory was scrambled.","shortMessageHtmlLink":"smp: advise kernel to collapse transparent hugepages eagerly when pre…"}},{"before":"3dd20e5747b61b0ee8e61accf2cb5a692874cf1b","after":"67b682622eedeff243a936ef446f38b38a732b2b","ref":"refs/heads/master","pushedAt":"2024-03-16T19:18:06.000Z","pushType":"push","commitsCount":1266,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"smp: advise kernel to collapse transparent hugepages eagerly when prefaulting\n\nInstead of deferring merging pages into hugepages to the transparent\nhugepage scanner, advise the kernel to do so immediately using the new\nMADV_COLLAPSE advice.\n\nSince the advice only takes hold if at least one page is faulted in,\ndo so in the prefaulter. 
To reduce the need to do extra work, do it\nafter exactly one page was prefaulted; this will limit the need to\nmove pages around if they were not prefaulted as a huge page to begin\nwith.\n\nNote that prefaulting usually generates huge pages anyway; this makes\nthe process more reliable in case memory was scrambled.","shortMessageHtmlLink":"smp: advise kernel to collapse transparent hugepages eagerly when pre…"}},{"before":"97d3b44a711b59dddd855b248d628f645b0c4fab","after":"6d5b9e3e347e7b4848af3ed266d76f65a60741cd","ref":"refs/heads/membarrier-restore-scaling","pushedAt":"2024-03-13T21:54:00.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"membarrier: cooperatively serialize calls to sys_membarrier\n\nIn [1], Linux started serializing sys_membarrier calls with a mutex,\nciting slowdowns. In fact we observed such slowdowns ([2]) on AWS d3en\ninstances.\n\nWhile the reactor calling sys_membarrier is going to sleep and wouldn't\nmind a slowdown, this can cause a large increase in wakeup latency.\nIf many reactors are simultaneously trying to sleep, they will have to\nwait until they get to acquire the mutex before ever getting a chance\nto poll for new events.\n\nFix this by only allowing one reactor to attempt to generate a barrier\nat a time; if it's racing with another reactor, we'll just return false\nand go back to polling. This returns our wakeup latency to normal.\n\nIn fact, we already had this lock before - this is a partial revert of [3].\nIt's different in that it now applies to both madvise() based barriers\nand membarrier() based barriers, whereas before it only applied to\nmprotect() based barriers (a predecessor of madvise() barriers). 
Since\nmadvise() barriers are only used on ancient kernels, the effort to check\nwhether locking is advisable is not worthwhile.\n\n[1] https://github.com/torvalds/linux/commit/944d5fe50f3f03daacfea16300e656a1691c4a23\n[2] https://github.com/scylladb/scylladb/issues/17207\n[3] https://github.com/scylladb/seastar/commit/77a58e4dc020233f66fccb8d9e8f7a8b7f9210c4","shortMessageHtmlLink":"membarrier: cooperatively serialize calls to sys_membarrier"}},{"before":null,"after":"97d3b44a711b59dddd855b248d628f645b0c4fab","ref":"refs/heads/membarrier-restore-scaling","pushedAt":"2024-03-13T21:49:32.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"membarrier: cooperatively serialize calls to sys_membarrier\n\nIn [1], Linux started serializing sys_membarrier calls with a mutex,\nciting slowdowns. In fact we observed such slowdowns ([2]) on AWS d3en\ninstances.\n\nWhile the reactor calling sys_membarrier is going to sleep and wouldn't\nmind a slowdown, this can cause a large increase in wakeup latency.\nIf many reactors are simultaneously trying to sleep, they will have to\nwait until they get to acquire the mutex before ever getting a chance\nto poll for new events.\n\nFix this by only allowing one reactor to attempt to generate a barrier\nat a time; if it's racing with another reactor, we'll just return false\nand go back to polling. This returns our wakeup latency to normal.\n\nIn fact, we already had this lock before - this is a partial revert of [3].\nIt's different in that it now applies to both madvise() based barriers\nand membarrier() based barriers, whereas before it only applied to\nmprotect() based barriers (a predecessor of madvise() barriers). 
Since\nmadvise() barriers are only used on ancient kernels, the effort to check\nwhether locking is advisable is not worthwhile.\n\n[1] https://github.com/torvalds/linux/commit/944d5fe50f3f03daacfea16300e656a1691c4a23\n[2] https://github.com/scylladb/scylladb/issues/17207\n[3] https://github.com/scylladb/seastar/commit/77a58e4dc020233f66fccb8d9e8f7a8b7f9210c4","shortMessageHtmlLink":"membarrier: cooperatively serialize calls to sys_membarrier"}},{"before":null,"after":"9904dddd4601575b6bce319ebf7402214412cdde","ref":"refs/heads/thread-debug-speedup","pushedAt":"2024-02-25T13:02:24.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"thread: speed up thread creation in debug mode\n\nIn debug mode, we clear the stack to avoid the address sanitizer\nthinking we're reading from uninitialized memory (although, typically,\nprograms write to the stack before reading from it). The problem is\nthat we use fill_n, which in debug mode doesn't get optimized into\na memset as every write has to pass through asan. This shows up in\nprofiles as the work to initialize the thread stack can be larger\nthan the work the thread does.\n\nFix by using memset() to clear the stack. 
memset() is intercepted by\nasan and is treated as a unit.","shortMessageHtmlLink":"thread: speed up thread creation in debug mode"}},{"before":null,"after":"d599be11e9e1a1a85ee110c54ab856b52c371f44","ref":"refs/heads/smp-count-non-static","pushedAt":"2024-02-18T18:30:54.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"smp: make smp::count non-global\n\nIn e6463df8a03616a509393ceb0212c3ca6ed57ab6 (\"smp: allow having\nmultiple instances of the smp class\") we made every global in smp\na thread local, but forgot smp::count. This patch makes smp::count\nlocal too, so we can have multiple smp instances with different shard\ncounts.\n\nUnfortunately, this can break alien threads that access smp::count\ndirectly, as evidenced by the adjustments needed for alien_test.","shortMessageHtmlLink":"smp: make smp::count non-global"}},{"before":"c41d561e137f1bc3d487ea472de3449f5a5624b0","after":"0d36329f931b1c436b1ac7faf31f793f27b47e69","ref":"refs/heads/uring-poll-first","pushedAt":"2024-02-11T16:40:24.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"reactor: io_uring: poll first on send/recv\n\nOur speculation mechanism tries to predict if data is available in the\nsocket (recv) or if there is room in the write buffer (send) and issues\nthe syscall directly if so. It may make more sense to change it later,\nbut for now, the expectation is that the request will not be\ncompleted immediately.\n\nMake use of this information by setting the IORING_RECVSEND_POLL_FIRST\nflag. 
This tells io_uring not to try to complete the request immediately,\nand instead issue a poll first.","shortMessageHtmlLink":"reactor: io_uring: poll first on send/recv"}},{"before":null,"after":"c41d561e137f1bc3d487ea472de3449f5a5624b0","ref":"refs/heads/uring-poll-first","pushedAt":"2024-02-11T16:39:53.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"reactor: io_uring: poll first on send/recv\n\nOur speculation mechanism tries to predict if data is available in the\nsocket (recv) or if there is room in the write buffer (send) and issues\nthe syscall directly if so. It may make more sense to change it later,\nbut for now, the expectation is that the request will not be\ncompleted immediately.\n\nMake use of this information by setting the IORING_RECVSEND_POLL_FIRST\nflag. This tells io_uring not to try to complete the request immediately,\nand instead issue a poll first.","shortMessageHtmlLink":"reactor: io_uring: poll first on send/recv"}},{"before":"5c03ccc820eb3d577a60ce659a3fc2da5260ed0c","after":"9685e8ee4a359ada08e5ed7d3bb2dab083065270","ref":"refs/heads/uring-opt","pushedAt":"2024-02-11T15:19:52.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"reactor: io_uring: enable some optimization flags\n\nEnable some optimization flags in an attempt to improve performance\nwith io_uring:\n\nIORING_SETUP_COOP_TASKRUN - prevents a completion from interrupting the\nreactor if it is running. 
Requires that the reactor issue an io_uring_enter\nsystem call in a timely fashion, but thanks to the task quota timer, we do.\n\nIORING_SETUP_TASKRUN_FLAG - sets up a flag that notifies the reactor\nthat the kernel has pending completions that it did not process. This\nallows the reactor to issue an io_uring_enter even if it has no pending\nsubmission queue entries or completion queue entries (e.g. it indicates\na third queue, in the kernel, is not empty).\n\nIORING_SETUP_SINGLE_ISSUER - elides some locking by guaranteeing that only\na single thread plays with the ring; this happens to be true for us.\n\nIORING_SETUP_DEFER_TASKRUN - batches up completion processing in an\nattempt to get some more performance.\n\nThese flags bump up the dependencies to Linux 6.1 and liburing 2.2. This\nseems worthwhile as right now io-uring lags behind linux-aio (which processes\ncompletions from interrupt context and therefore doesn't need all these\noptimizations).\n\nAfter this exercise, io_uring is still slower than linux-aio.","shortMessageHtmlLink":"reactor: io_uring: enable some optimization flags"}},{"before":null,"after":"5c03ccc820eb3d577a60ce659a3fc2da5260ed0c","ref":"refs/heads/uring-opt","pushedAt":"2024-02-09T18:23:36.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"reactor: io_uring: enable some optimization flags\n\nEnable some optimization flags in an attempt to improve performance\nwith io_uring:\n\nIORING_SETUP_COOP_TASKRUN - prevents a completion from interrupting the\nreactor if it is running. Requires that the reactor issue an io_uring_enter\nsystem call in a timely fashion, but thanks to the task quota timer, we do.\n\nIORING_SETUP_TASKRUN_FLAG - sets up a flag that notifies the reactor\nthat the kernel has pending completions that it did not process. 
This\nallows the reactor to issue an io_uring_enter even if it has no pending\nsubmission queue entries or completion queue entries (e.g. it indicates\na third queue, in the kernel, is not empty).\n\nIORING_SETUP_SINGLE_ISSUER - elides some locking by guaranteeing that only\na single thread plays with the ring; this happens to be true for us.\n\nIORING_SETUP_DEFER_TASKRUN - batches up completion processing in an\nattempt to get some more performance.\n\nThese flags bump up the dependencies to Linux 6.1 and liburing 2.2. This\nseems worthwhile as right now io-uring lags behind linux-aio (which processes\ncompletions from interrupt context and therefore doesn't need all these\noptimizations). However, I don't know how to specify the liburing\nversion requirement.\n\nAfter this exercise, io_uring is still slower than linux-aio.","shortMessageHtmlLink":"reactor: io_uring: enable some optimization flags"}},{"before":null,"after":"394847e7e6509582eb6469f1bd5e8951ef55ee9d","ref":"refs/heads/clang-18","pushedAt":"2024-02-01T16:06:07.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"docker: install clang-tools-18\n\nto address the build failure caused by\nhttps://github.com/llvm/llvm-project/issues/59827, and to test\nthe build with C++23, we need to use clang-18. 
which will be\nreleased in March 2024.\n\nSigned-off-by: Kefu Chai \n\nCloses scylladb/seastar#2058\n\n[avi: regenerate toolchain]","shortMessageHtmlLink":"docker: install clang-tools-18"}},{"before":null,"after":"3af32ec7adf796bc926e8ec80ba44091e8dd6441","ref":"refs/heads/get0->get","pushedAt":"2024-02-01T15:55:39.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"treewide: replace calls to future::get0() by calls to future::get()\n\nget0() is a remnant of the days of variadic futures.\n\nTo prepare for its deprecation, stop using it internally.","shortMessageHtmlLink":"treewide: replace calls to future::get0() by calls to future::get()"}},{"before":null,"after":"2a9ee05341f41e8e9b230caa807717b7c31b77cd","ref":"refs/heads/coroutine-devariadicate","pushedAt":"2024-01-31T13:53:21.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"coroutine: remove remnants of variadic futures\n\nCoroutine support still contains some remnants of variadic futures;\nremove them to simplify the code.\n\nThe `awaiter` struct had three specializations, one for void, one\nfor one template parameter, and one for {zero, two+} parameters.\nThe first two remain.\n\nwithout_preemption_check() also had three specializations, only one\nremains.","shortMessageHtmlLink":"coroutine: remove remnants of variadic futures"}},{"before":null,"after":"6e2e61ddce3be18ae0b2c12812dde6976bf31f12","ref":"refs/heads/deprecate-future-get0","pushedAt":"2024-01-28T14:01:36.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi 
Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"future: deprecate future::get0()\n\nIn the days when futures carried tuples, it was useful to get\nthe first/only element. But it's now a synonym for get(), so\ndeprecate it.","shortMessageHtmlLink":"future: deprecate future::get0()"}},{"before":"b8326d4e9b2040ffd1d2f182ae720fb49182a79e","after":"e01f8817750aad1b3d0ad35f8a2a5dab282c5239","ref":"refs/heads/tighten-107852-check","pushedAt":"2024-01-24T18:52:11.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"build: prevent gcc -Wstringop-overflow workaround from affecting clang\n\nWe check for a gcc-specific bug, but because we inject warning flags,\nthe test fails on newer versions of clang (since it doesn't like the\nwarning flags). Due to the failure, we then inject the workaround for\nclang builds, which again fails.\n\nFix that by only applying the workaround for gcc.","shortMessageHtmlLink":"build: prevent gcc -Wstringop-overflow workaround from affecting clang"}},{"before":null,"after":"c9617a8485d68dacabbf87f246dd0124e08b8ab8","ref":"refs/heads/future-detuple-get0-return-type","pushedAt":"2024-01-24T18:28:59.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"future: remove tuples from get0_return_type\n\nget0_return_type uses some complicated type manipulations to translate\nvoid types; these date from the days when we supported variadic futures.\n\nWe don't any more, so simplify.","shortMessageHtmlLink":"future: remove tuples from 
get0_return_type"}},{"before":null,"after":"b8326d4e9b2040ffd1d2f182ae720fb49182a79e","ref":"refs/heads/tighten-107852-check","pushedAt":"2024-01-24T12:25:34.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"build: prevent gcc -Wstringop-overflow workaround from affecting clang\n\nWe check for a gcc-specific bug, but because we inject warning flags,\nthe test fails on newer versions of clang (since it doesn't like the\nwarning flags). Due to the failure, we then inject the workaround for\nclang builds, which again fails.\n\nFix that by only applying the workaround for gcc.","shortMessageHtmlLink":"build: prevent gcc -Wstringop-overflow workaround from affecting clang"}},{"before":null,"after":"0e1f3a31918823806e4e812838e13d5ef3129a52","ref":"refs/heads/circular-buffer-uninitialized-move","pushedAt":"2024-01-24T09:40:35.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"circular_buffer_fixed_capacity: use std::uninitialized_move() instead of open-coding\n\nThe code even contains a comment about it, so standardize the code.","shortMessageHtmlLink":"circular_buffer_fixed_capacity: use std::uninitialized_move() instead…"}},{"before":null,"after":"0e528c37b75a4875c71e93c82542518956ddabc1","ref":"refs/heads/C++23","pushedAt":"2024-01-24T09:36:10.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"Update supported C++ standards to C++23 and C++20 (dropping C++17)\n\nC++23 has passed all the ballots [1]. We can therefore support it now\nformally. 
This means that C++17 is no longer supported, and C++20 is\nsupported until C++26 is released.\n\nThe primary benefit of C++23 support is that we can now use coroutines\nin core Seastar code, as all supported C++ versions have them.\n\n[1] https://lists.isocpp.org/std-proposals/2024/01/8880.php","shortMessageHtmlLink":"Update supported C++ standards to C++23 and C++20 (dropping C++17)"}},{"before":null,"after":"dadf66f8a4b1c77f0d3ca523587f7006c1054154","ref":"refs/heads/memory-include-concepts","pushedAt":"2024-01-02T15:52:55.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"memory: include concepts.hh\n\nFixes build with modules.","shortMessageHtmlLink":"memory: include concepts.hh"}},{"before":"27a49965876fe984ccb57d2f077ef188b45f70a0","after":"6418734493ea67e31c32cbc5546724113778a41e","ref":"refs/heads/debug-less-polling","pushedAt":"2023-12-19T19:00:50.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"reactor: poll less frequently in debug mode\n\nUsually, seastar loops will try to run in one task, until the task\nquota expires. This avoids the overhead of breaking out into the\nreactor and running run_some_tasks(). To avoid going over the task\nquota and generating high latency, these loops check the need_preempt()\nfunction to see whether the task quota expired.\n\nSince many loops often do not preempt, the code paths where they do\nare a source of bugs. To root them out, need_preempt() is defined in\ndebug mode as always true. As a result, every loop will immediately\nbreak into the scheduler, which also checks need_preempt() to see if\nnew I/O has arrived and needs servicing (perhaps by a high priority\nscheduling_group). 
The end result is that every loop body execution\nis accompanied by a reactor poll, which is an expensive operation\ndesigned to happen every 0.5ms, not on every vector element (or\nwhatever is being processed in the loop).\n\nTo fix this, we split need_preempt() into two functions: the\noriginal need_preempt(), which tells loops whether they need to preempt,\nand a new internal::scheduler_need_preempt(), which tells the scheduler\nwhether it needs to poll for I/O. Outside debug mode, they are the same,\nsince in both cases the goal is to limit latency to the task quota.\nIn debug mode, need_preempt() always returns true (to check the preemption\ncode paths in loops), while scheduler_need_preempt() returns true\nevery 64 calls (to allow for some I/O polling without destroying loop\nefficiency).\n\nThis improves debug mode performance by more than 2X in some tests [1].\n\nSince we play with need_preempt, the stall detector tests break. Disable\nthem in debug mode, they're aimed at production anyway.\n\n[1] https://github.com/scylladb/scylladb/issues/16470","shortMessageHtmlLink":"reactor: poll less frequently in debug mode"}},{"before":"33f3a4d9131b65d0887d28a47cfa6964bcf08c32","after":"27a49965876fe984ccb57d2f077ef188b45f70a0","ref":"refs/heads/debug-less-polling","pushedAt":"2023-12-19T18:15:36.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"reactor: poll less frequently in debug mode\n\nUsually, seastar loops will try to run in one task, until the task\nquota expires. This avoids the overhead of breaking out into the\nreactor and running run_some_tasks(). To avoid going over the task\nquota and generating high latency, these loops check the need_preempt()\nfunction to see whether the task quota expired.\n\nSince many loops often do not preempt, the code paths where they do\nare a source of bugs. 
To root them out, need_preempt() is defined in\ndebug mode as always true. As a result, every loop will immediately\nbreak into the scheduler, which also checks need_preempt() to see if\nnew I/O has arrived and needs servicing (perhaps by a high priority\nscheduling_group). The end result is that every loop body execution\nis accompanied by a reactor poll, which is an expensive operation\ndesigned to happen every 0.5ms, not on every vector element (or\nwhatever is being processed in the loop).\n\nTo fix this, we split need_preempt() into two functions: the\noriginal need_preempt(), which tells loops whether they need to preempt,\nand a new internal::scheduler_need_preempt(), which tells the scheduler\nwhether it needs to poll for I/O. Outside debug mode, they are the same,\nsince in both cases the goal is to limit latency to the task quota.\nIn debug mode, need_preempt() always returns true (to check the preemption\ncode paths in loops), while scheduler_need_preempt() returns true\nevery 64 calls (to allow for some I/O polling without destroying loop\nefficiency).\n\nThis improves debug mode performance by more than 2X in some tests [1].\n\n[1] https://github.com/scylladb/scylladb/issues/16470","shortMessageHtmlLink":"reactor: poll less frequently in debug mode"}},{"before":null,"after":"33f3a4d9131b65d0887d28a47cfa6964bcf08c32","ref":"refs/heads/debug-less-polling","pushedAt":"2023-12-19T17:32:52.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"reactor: poll less frequently in debug mode\n\nUsually, seastar loops will try to run in one task, until the task\nquota expires. This avoids the overhead of breaking out into the\nreactor and running run_some_tasks(). 
To avoid going over the task\nquota and generating high latency, these loops check the need_preempt()\nfunction to see whether the task quota expired.\n\nSince many loops often do not preempt, the code paths where they do\nare a source of bugs. To root them out, need_preempt() is defined in\ndebug mode as always true. As a result, every loop will immediately\nbreak into the scheduler, which also checks need_preempt() to see if\nnew I/O has arrived and needs servicing (perhaps by a high priority\nscheduling_group). The end result is that every loop body execution\nis accompanied by a reactor poll, which is an expensive operation\ndesigned to happen every 0.5ms, not on every vector element (or\nwhatever is being processed in the loop).\n\nTo fix this, we split need_preempt() into two functions: the\noriginal need_preempt(), which tells loops whether they need to preempt,\nand a new internal::scheduler_need_preempt(), which tells the scheduler\nwhether it needs to poll for I/O. Outside debug mode, they are the same,\nsince in both cases the goal is to limit latency to the task quota.\nIn debug mode, need_preempt() always returns true (to check the preemption\ncode paths in loops), while scheduler_need_preempt() returns true\nevery 64 calls (to allow for some I/O polling without destroying loop\nefficiency).\n\nThis improves debug mode performance by more than 2X in some tests [1].\n\n[1] https://github.com/scylladb/scylladb/issues/16470","shortMessageHtmlLink":"reactor: poll less frequently in debug mode"}},{"before":"fe4ca1ccb2df90b043319fe8ece9a640b6d9f232","after":"ba51f097358922d4ac8f5c37f0150eaaacda8bb8","ref":"refs/heads/bump-docker-baseline","pushedAt":"2023-12-05T15:04:50.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"docker: bump up to clang {16,17} and gcc {12,13}\n\nsince we only support 
the latest two major releases of compilers.\nat the moment of writing, Clang just released v17, and the latest\nmajor release of GCC is v13.\n\nso we should install them respectively.\n\nbecause ubuntu kinetic does not ship clang-16. we need to bump up\nthe base image from ubuntu:kinetic to ubuntu:mantic despite that\nmantic is not an LTS release. we should use ubuntu 24.04 once\nit's out.\n\nsee also 80969ef9ffc10bb219bd3ef83ab76c2c536de7ec, which bumped\nthe compilers also.\n\nSigned-off-by: Kefu Chai \n\nCloses scylladb/seastar#1630\n\n[avi: regenerate image]","shortMessageHtmlLink":"docker: bump up to clang {16,17} and gcc {12,13}"}},{"before":null,"after":"ce29f039a011c23a0c4590882da9a797c106afac","ref":"refs/heads/dpdk-internal-poller","pushedAt":"2023-12-04T18:14:53.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"dpdk: adjust for poller in internal namespace\n\nIn be5fe12ebda1eacdf64cc5ee95664dd43f4b7e4d, we moved the poller class\nto the internal namespace, but did not adjust dpdk.cc. This corrects\nthe problem.","shortMessageHtmlLink":"dpdk: adjust for poller in internal namespace"}},{"before":null,"after":"fe4ca1ccb2df90b043319fe8ece9a640b6d9f232","ref":"refs/heads/bump-docker-baseline","pushedAt":"2023-11-02T16:33:47.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"avikivity","name":"Avi Kivity","path":"/avikivity","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1017210?s=80&v=4"},"commit":{"message":"docker: bump up to clang {15,16} and gcc {12,13}\n\nsince we only support the latest two major releases of compilers.\nat the moment of writing, Clang just released v16, and the latest\nmajor release of GCC is v13.\n\nso we should install them respectively.\n\nbecause ubuntu kinetic does not ship clang-16. 
we need to bump up\nthe base image from ubuntu:kinetic to ubuntu:lunar despite that\nlunar is not an LTS release. we should use ubuntu 24.04 once\nit's out.\n\nsee also 80969ef9ffc10bb219bd3ef83ab76c2c536de7ec, which bumped\nthe compilers also.\n\nSigned-off-by: Kefu Chai \n\nCloses scylladb/seastar#1630\n\n[avi: update toolchain]","shortMessageHtmlLink":"docker: bump up to clang {15,16} and gcc {12,13}"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAETZlqxwA","startCursor":null,"endCursor":null}},"title":"Activity · avikivity/seastar"}