
Incremental reads for MachOFile and FatFile #23

Open
woodruffw opened this issue Feb 25, 2016 · 4 comments

Comments

@woodruffw
Member

Right now, MachOFile.new and FatFile.new read entire binaries into memory. This is efficient when manipulating their contents, but unnecessarily expensive when merely testing a file's sanity (good magic, reasonable size, etc.). As a result, checking large numbers of Mach-O files via exception handling is needlessly slow when MachOFile or FatFile is used directly. #22 circumvents this problem for the generic MachO.open method, but the Mach-O type classes should also handle it individually.

I'm assigning this to myself, but it's not particularly high on the priority list (Homebrew uses MachO.open only).

@UniqMartin
Contributor

I wonder if you have pondered this a little more. An easy approach would be to read just the first few pages (4096-byte units) of a Mach-O file: read the Mach header first, inspect its sizeofcmds field, round that up to the next page boundary, and read only that much.
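
Very roughly, a sketch of that header-first read using plain Ruby IO (not ruby-macho's actual internals). It assumes a little-endian 64-bit mach_header_64; a real implementation would also need the 32-bit and byte-swapped magics, plus fat binaries:

```ruby
PAGE_SIZE   = 4096
MH_MAGIC_64 = 0xfeedfacf # little-endian 64-bit magic
HEADER_SIZE = 32         # sizeof(mach_header_64)

def read_load_command_region(path)
  File.open(path, "rb") do |io|
    header = io.read(HEADER_SIZE)
    raise "truncated header" if header.nil? || header.bytesize < HEADER_SIZE

    # mach_header_64 fields: magic, cputype, cpusubtype, filetype, ncmds,
    # sizeofcmds, flags, reserved (all 32-bit, little-endian here)
    fields = header.unpack("L<l<l<L<L<L<L<L<")
    raise "bad magic" unless fields[0] == MH_MAGIC_64

    sizeofcmds = fields[5]

    # Round header + load commands up to the next page boundary and read
    # only that chunk instead of the whole binary.
    wanted  = HEADER_SIZE + sizeofcmds
    rounded = ((wanted + PAGE_SIZE - 1) / PAGE_SIZE) * PAGE_SIZE

    io.rewind
    io.read(rounded)
  end
end
```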

You could then manipulate that chunk and overwrite only that portion when syncing changes back to the same file, or fetch the rest when writing to a new file. But even just optimizing the read-only case seems worthwhile, as I think that's the more relevant one.

Or am I overlooking something that would cause such an approach to massively complicate the code?

@woodruffw
Member Author

I have, yeah.

Incremental reads would certainly be beneficial in the read-only case (and probably wouldn't add too much complexity), but I'm not so sure about writing. I'm going to experiment a bit locally and see if I can come up with a solution that improves our read performance without complicating I/O significantly; I like that ruby-macho currently performs only one read (two, counting MachO.open) per binary.

@UniqMartin
Contributor

I also like the simplicity of a single read, but it seems wasteful to read the whole binary (sometimes many megabytes) just to extract the first few kilobytes, though that's probably less noticeable nowadays thanks to very fast SSDs. That's just a gut feeling, not backed by any numbers, so it would be interesting if you could come up with some statistics. I was just curious, though; this is really low priority.

@woodruffw
Member Author

> But it seems wasteful to read the whole binary (sometimes many megabytes) just to extract the first few kilobytes

Absolutely agree, and relying on hardware (SSDs) probably isn't sustainable. A good solution might be deferring the majority of the read until the first write call (or any other call that requires inspection beyond sizeofcmds). I'll see what I can come up with 👍
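
As a rough illustration of that deferral idea, here's a hypothetical wrapper (not ruby-macho's real MachOFile API, and again assuming a little-endian 64-bit header): the constructor reads only the header and load commands, and the remainder of the binary is fetched the first time something actually needs it.

```ruby
# Hypothetical sketch: defer reading everything past the load commands
# until a caller actually needs the full binary (e.g. before a write).
class LazyMachORead
  HEADER_SIZE = 32 # sizeof(mach_header_64); 32-bit/fat handling omitted

  def initialize(path)
    @io = File.open(path, "rb")
    header = @io.read(HEADER_SIZE)
    sizeofcmds = header.unpack("L<l<l<L<L<L<")[5] # sixth field of mach_header_64

    @io.rewind
    @prefix = @io.read(HEADER_SIZE + sizeofcmds) # header + load commands only
    @rest = nil
  end

  # Everything past the load commands, read lazily on first use.
  def rest
    @rest ||= begin
      @io.seek(@prefix.bytesize)
      @io.read || ""
    end
  end

  # A full serialization (e.g. for writing to a new file) forces the deferred read.
  def to_binary
    @prefix + rest
  end
end
```

The obvious tradeoff is that the IO handle has to stay open (or the file has to be reopened) until the deferred read happens, which is part of what makes the write path harder than the read-only one.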
