
What's the status of this project? #88

Open
karwa opened this issue Mar 3, 2021 · 12 comments

Comments


karwa commented Mar 3, 2021

Given the recent news about S4TF being archived, I wonder if anybody will continue working on this.

Would it perhaps be a good idea to seek out a new home for it?

@compnerd
Contributor

Are there actual issues that need to be addressed and aren't being addressed?


karwa commented Aug 15, 2021

I wouldn't mind adding some of these things, but I'm not sure if this repository is being maintained.

I think Swift could really use a better benchmarking library than XCTest, and this one is really good. It might make sense to move it to the apple organisation alongside other Swift projects (swift-format, swift-collections, etc.), so the community would have a clear place to collaborate on improvements.

Perhaps it could even be merged with swift-collections-benchmark. They test different things, of course (scalability of collections vs. performance of code snippets), but together they could begin to form a more comprehensive benchmarking library.


ktoso commented Aug 16, 2021

As far as outputs / visuals are concerned, it might be useful to be able to emit data in JMH-style JSON files; that way we could leverage a lot of existing tooling, like https://jmh.morethan.io/

@compnerd
Contributor

@ktoso if google/benchmark supports that format, I think that would be a good reason to support it here as well.

@karwa I think that graphical output is something that I would prefer not to be part of this project. We should instead generate the data in a format that can be consumed by other tools and be used for plotting and analysis.


ktoso commented Aug 17, 2021

@ktoso if google/benchmark supports that format, I think that would be a good reason to support it here as well.

Not sure about google/benchmark, but it is the de facto standard format for benchmark results in the JVM ecosystem (and part of the JDK). The format is very boring, so I think we could easily support it :)

@compnerd
Contributor

Hmm, do you happen to have a good reference for the format?


ktoso commented Aug 18, 2021

I checked in with the primary maintainer; it isn't formally specified but hasn't changed in years: https://twitter.com/shipilev/status/1427889432451944449

There are example JSONs on the page I linked, though, if you just want a quick skim; it's a pretty simple format. E.g. the "load single run example" is:

[
	{
		"benchmark": "io.morethan.javabenchmarks.showcase.QuickBenchmark.sleep100Milliseconds",
		"mode": "avgt",
		"threads": 1,
		"forks": 1,
		"warmupIterations": 0,
		"warmupTime": "1 s",
		"warmupBatchSize": 1,
		"measurementIterations": 1,
		"measurementTime": "1 s",
		"measurementBatchSize": 1,
		"primaryMetric": {
			"score": 102.7422955,
			"scoreError": "NaN",
			"scoreConfidence": [
				"NaN",
				"NaN"
			],
			"scorePercentiles": {
				"0.0": 102.7422955,
				"50.0": 102.7422955,
				"90.0": 102.7422955,
				"95.0": 102.7422955,
				"99.0": 102.7422955,
				"99.9": 102.7422955,
				"99.99": 102.7422955,
				"99.999": 102.7422955,
				"99.9999": 102.7422955,
				"100.0": 102.7422955
			},
			"scoreUnit": "ms/op",
			"rawData": [
				[
					102.7422955
				]
			]
		},
		"secondaryMetrics": {}
	},
...

@compnerd
Contributor

Hmm, so, I looked into google/benchmark, and it does have JSON format output support. I would rather have the same output style. If JMH is important to you, then I'd be open to the idea of a jq script to convert from the google/benchmark format to the JMH format.
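Such a jq conversion might look roughly like the sketch below. The field names follow google/benchmark's documented JSON output (a top-level "benchmarks" array whose entries carry name, real_time, and time_unit); the inline sample input is illustrative only, and a real run would instead read the file produced by --benchmark_out=<file> --benchmark_out_format=json.

```shell
# Sketch only: reshape google/benchmark JSON into minimal JMH-style records.
# Create an illustrative sample of google/benchmark's output schema.
cat <<'EOF' > benchmark_output.json
{"benchmarks":[{"name":"BM_Example","real_time":12.5,"cpu_time":12.0,"time_unit":"ns"}]}
EOF

# Map each benchmark entry to the subset of JMH fields the visualizer needs.
jq '[ .benchmarks[] | {
      benchmark: .name,
      mode: "avgt",
      threads: 1,
      forks: 1,
      primaryMetric: {
        score: .real_time,
        scoreUnit: (.time_unit + "/op"),
        rawData: [[ .real_time ]]
      },
      secondaryMetrics: {}
    } ]' benchmark_output.json > jmh_output.json
```

This fills only the fields needed for basic display; the percentiles and confidence intervals shown in the earlier example JSON are left out.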


ktoso commented Aug 18, 2021

Why not allow for --format=...? The default format may of course be the google/benchmark one 👍

@compnerd
Contributor

--format is already available, and json is one of the options :)


ktoso commented Aug 19, 2021

Oh sorry I missed that then :)


Sherlouk commented Aug 2, 2022


Not pretending it's clean, but by modifying your main.swift file you can get JMH output without too much effort.

As an MVP:

import Benchmark
import Foundation

var runner = BenchmarkRunner(
    suites: [
        movementBenchmarks,
    ],
    settings: parseArguments(),
    customDefaults: defaultSettings
)

try runner.run()

extension Array where Element == Double {
    var sum: Double {
        var total: Double = 0
        for x in self {
            total += x
        }
        return total
    }
    
    var mean: Double {
        if count == 0 {
            return 0
        } else {
            let invCount: Double = 1.0 / Double(count)
            return sum * invCount
        }
    }

    var median: Double {
        guard count >= 2 else { return mean }

        // If we have an odd number of elements, the
        // center element is the median.
        let s = self.sorted()
        let center = count / 2
        if count % 2 == 1 {
            return s[center]
        }

        // If we have an even number of elements, we
        // return the average of the two middle elements.
        let center2 = count / 2 - 1
        return (s[center] + s[center2]) / 2
    }

    func percentile(_ v: Double) -> Double {
        if v < 0 {
            fatalError("Percentile cannot be negative.")
        }
        if v > 100 {
            fatalError("Percentile cannot be more than 100.")
        }
        if count == 0 {
            return 0
        }
        let sorted = self.sorted()
        let p = v / 100.0
        let index = (Double(count) - 1) * p
        var low = index
        low.round(.down)
        var high = index
        high.round(.up)
        if low == high {
            return sorted[Int(low)]
        } else {
            let lowValue = sorted[Int(low)] * (high - index)
            let highValue = sorted[Int(high)] * (index - low)
            return lowValue + highValue
        }
    }
}

extension Array {
    func chunked(into size: Int) -> [[Element]] {
        return stride(from: 0, to: count, by: size).map {
            Array(self[$0 ..< Swift.min($0 + size, count)])
        }
    }
}



let jmhResult: [[String: Any]] = runner.results.map { result in
    let chunks = Array(result.measurements.prefix(20 * 5)).chunked(into: 20)
    
    return [
        "benchmark": "\(result.suiteName).\(result.benchmarkName)",
        "mode": "avgt",
        "threads": 1,
        "forks": chunks.count,
        "measurementIterations": result.measurements.count,
        "measurementTime": "\(result.measurements.sum) \(result.settings.timeUnit.description)",
        "measurementBatchSize": 1,
        "warmupIterations": result.warmupMeasurements.count,
        "warmupTime": "\(result.warmupMeasurements.sum) \(result.settings.timeUnit.description)",
        "warmupBatchSize": 1,
        "primaryMetric": [
            "score": result.measurements.median, // JMH expects a numeric score, not a string
            "scoreUnit": result.settings.timeUnit.description.trimmingCharacters(in: .whitespaces),
            "scorePercentiles": [
                "0.0": result.measurements.percentile(0),
                "50.0": result.measurements.percentile(50),
                "90.0": result.measurements.percentile(90),
                "95.0": result.measurements.percentile(95),
                "99.0": result.measurements.percentile(99),
                "99.9": result.measurements.percentile(99.9),
                "99.99": result.measurements.percentile(99.99),
                "99.999": result.measurements.percentile(99.999),
                "99.9999": result.measurements.percentile(99.9999),
                "100.0": result.measurements.percentile(100),
            ],
            "rawData": chunks
        ],
        "secondaryMetrics": [String: Any]() // serializes as {} to match JMH's object, not []
    ]
}


let jmh = try String(decoding: JSONSerialization.data(
    withJSONObject: jmhResult,
    options: [.prettyPrinted, .withoutEscapingSlashes]
), as: UTF8.self)

let path = FileManager.default.currentDirectoryPath + "/\(UUID().uuidString).json"
try jmh.write(toFile: path, atomically: true, encoding: .utf8)
print("\nWritten JMH results to \(path)")
