
Make use of multiple CPU cores #384

Open

sa666666 opened this issue Jun 6, 2020 · 7 comments

Comments

@sa666666

sa666666 commented Jun 6, 2020

The docs state that scanmem is slow on the first pass for large programs, and I've experienced this too. Would it be possible to partition the area to search and divide it among multiple cores? I have an 8-core, 16-thread machine, but most of it goes to waste since scanmem only uses one core. How feasible would this be?
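
For reference, the search space is already divided at natural boundaries: scanmem builds its region list from /proc/&lt;pid&gt;/maps, where each line describes one region. A minimal standalone sketch of listing those regions (not scanmem code; restricting to readable+writable regions is just an illustrative choice):

```c
/* Standalone illustration, not scanmem code: enumerate the memory regions
 * of a process from /proc/<pid>/maps. Each line of that file describes one
 * region, so the scannable area is already split at natural boundaries. */
#include <stdio.h>

int main(int argc, char **argv)
{
    char path[64], line[512];
    snprintf(path, sizeof path, "/proc/%s/maps", argc > 1 ? argv[1] : "self");

    FILE *f = fopen(path, "r");
    if (!f) { perror("fopen"); return 1; }

    while (fgets(line, sizeof line, f)) {
        unsigned long start, end;
        char perms[5];
        /* only keep readable+writable regions for this illustration */
        if (sscanf(line, "%lx-%lx %4s", &start, &end, perms) == 3
                && perms[0] == 'r' && perms[1] == 'w')
            printf("region %#lx-%#lx (%lu bytes)\n", start, end, end - start);
    }
    fclose(f);
    return 0;
}
```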

@12345ieee
Member

12345ieee commented Jun 7, 2020

> How feasible would this be?

It's clearly doable, but needs a huge amount of work.

I'll leave this open, maybe someone young and brave wants to try.

@shenada
Contributor

shenada commented Jun 7, 2020 via email

@12345ieee
Member

While I could write down a nice document on how the pieces fit, it'd take me ages and I really don't want to do that kind of work.

We can discuss this in a chat; scanmem has a (mostly deserted) Slack room here: Chat on Slack
Or if you like Discord more, I've been crashing at PINCE's place lately; it's a bit more lively: https://discord.gg/KCNDp9m

@shenada
Contributor

shenada commented Jun 7, 2020 via email

@12345ieee
Member

I'll just close it; nobody uses it anyway.

IRC is fine by me, but I'd rather you try Discord first, because it keeps history.

@shenada
Contributor

shenada commented Jun 7, 2020 via email

@bkazemi
Member

bkazemi commented Sep 5, 2020

Well, not in v0.18 for sure, but the easiest place to start would be to use pthreads for the region search, since the regions are logically divided anyway. Then you'd have to synchronize access to the matches array, which will be a bit of a pain because we want the addresses to stay properly ordered. Then there's the issue of progress output; it would get jumbled without proper care (or ncurses). The last thing is handling the user stopping the search prematurely, which isn't difficult because we handle that in the main thread. There may be ptrace() issues too. Probably other things as well ;-)
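
A minimal sketch of the split described above, using pthreads; all names here (region_t, match_t, scan_worker, parallel_scan, and the plain int-equality test) are illustrative stand-ins, not scanmem's actual structures. Each worker scans its own contiguous slice of the region list into a private buffer, and because the slices are in address order, concatenating the per-thread buffers afterwards keeps the matches ordered without locking:

```c
/* Illustrative sketch only: region_t, match_t and parallel_scan() are
 * stand-ins for this discussion, not scanmem's real data structures. */
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct { uint8_t *data; size_t size; uintptr_t start; } region_t;
typedef struct { uintptr_t addr; } match_t;

typedef struct {
    const region_t *regions;  /* this worker's slice of the region list */
    size_t n_regions;
    int target;               /* value being scanned for */
    match_t *matches;         /* private result buffer, merged after join */
    size_t n_matches, cap;
} worker_arg_t;

static void *scan_worker(void *argp)
{
    worker_arg_t *a = argp;
    for (size_t r = 0; r < a->n_regions; r++) {
        const region_t *reg = &a->regions[r];
        for (size_t off = 0; off + sizeof(int) <= reg->size; off++) {
            int v;
            memcpy(&v, reg->data + off, sizeof v);  /* unaligned-safe read */
            if (v != a->target)
                continue;
            if (a->n_matches == a->cap) {
                a->cap = a->cap ? a->cap * 2 : 64;
                a->matches = realloc(a->matches, a->cap * sizeof *a->matches);
            }
            a->matches[a->n_matches++].addr = reg->start + off;
        }
    }
    return NULL;
}

/* Give each thread a contiguous run of regions. Every worker writes only
 * to its own buffer, so no locking is needed, and concatenating the
 * buffers in thread order preserves the address ordering of matches. */
void parallel_scan(const region_t *regions, size_t n_regions, int target, int n_threads)
{
    pthread_t tids[n_threads];
    worker_arg_t args[n_threads];
    size_t per = (n_regions + n_threads - 1) / n_threads;

    for (int t = 0; t < n_threads; t++) {
        size_t lo = (size_t)t * per;
        if (lo > n_regions)
            lo = n_regions;
        size_t hi = lo + per < n_regions ? lo + per : n_regions;
        args[t] = (worker_arg_t){ .regions = regions + lo,
                                  .n_regions = hi - lo,
                                  .target = target };
        pthread_create(&tids[t], NULL, scan_worker, &args[t]);
    }
    for (int t = 0; t < n_threads; t++)
        pthread_join(tids[t], NULL);
    /* ...append args[0..n_threads-1].matches to the global list in order,
     * then free the per-thread buffers... */
}
```

Per-thread buffers sidestep the synchronization problem on the matches array; the other points raised above (progress output, early cancellation, possible ptrace() issues) would still need separate handling.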
