
faster __notify implementation #182

Open
wants to merge 2 commits into base: jordan/refactor-everything!

Conversation

lyonlai
Copy link

@lyonlai lyonlai commented Oct 31, 2015

I've been using the RC branch to tune the performance of our app and found that the current __notify implementation is a bit slow.

The current notify workflow is the following:

  • grab every observer from the dirty stores and unify the ids in a set
  • for each observer, evaluate its getter and compare the new value with the previous one
  • call the handler if the value changed
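That per-observer flow can be sketched with plain functions standing in for the NuclearJS internals (`observers`, `evaluate`, and the `is` comparison here are stand-ins, not the real API):

```javascript
// Sketch only: evaluate every observer's getter against the previous and
// current state, and call the handler when the value changed.
function notifyPerObserver(observers, prevState, currState, evaluate, is) {
  observers.forEach(entry => {
    const prevValue = evaluate(prevState, entry.getter)
    const currValue = evaluate(currState, entry.getter)
    if (!is(prevValue, currValue)) {
      entry.handler(currValue)
    }
  })
}
```

With n observers this performs n evaluate pairs, even when many observers share the same getter.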

The situation in our app is that we have quite a few observers alive, and some of them share the same getter. Some of the getter computations also take a while. Although the new getter value is cached, the current __notify process still takes noticeable time on the tablet.

This pull request groups the observer handlers by getter, which results in the following flow:

  • grab every observer from the dirty stores and unify the ids in a set
  • group the handlers by getter
  • for each getter, evaluate it and compare the new value with the previous one
  • if the value changed, notify all handlers listening to it
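The grouped flow can be sketched the same way (again with stand-ins for the real internals, not the actual PR code):

```javascript
// Sketch only: group handlers by getter first, then evaluate each unique
// getter once and fan out to all of its handlers on change.
function notifyGrouped(observers, prevState, currState, evaluate, is) {
  // group the handlers by getter
  const handlersByGetter = new Map()
  observers.forEach(entry => {
    if (!handlersByGetter.has(entry.getter)) {
      handlersByGetter.set(entry.getter, [])
    }
    handlersByGetter.get(entry.getter).push(entry.handler)
  })
  // evaluate each unique getter once; notify all its handlers on change
  handlersByGetter.forEach((handlers, getter) => {
    const prevValue = evaluate(prevState, getter)
    const currValue = evaluate(currState, getter)
    if (!is(prevValue, currValue)) {
      handlers.forEach(handler => handler(currValue))
    }
  })
}
```

With m unique getters this performs m evaluate pairs instead of n, while still calling every handler.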

The results in our app are quite good after that. I don't know whether this breaks anything I'm not aware of, but from what I can see at the moment it's doing its job.

Thoughts?

…roup the notifier by getters,

evaluate once then notify all.
@mindjuice
Copy link

Curious to know what your before and after numbers were like.

For me, the notify got much faster after I ensured that my data was fully Immutable all the way down. I had previously intentionally tried to avoid going "deep" with Immutable, thinking I was saving time on the conversion to and from immutable/JS.

Now my code actually just works with the Immutable objects and lists directly, and my notify times went from 2500ms to about 50ms.

@lyonlai
Copy link
Author

lyonlai commented Oct 31, 2015

@mindjuice, the before and after numbers for just the __notify function went from 360ms down to around 60ms, which makes quite a difference.

Going deep with Immutable is a good decision. My team and I are enjoying it.

@bhamodi
Copy link
Contributor

bhamodi commented Oct 31, 2015

Interesting approach! @jordangarcia - check this out.

@jordangarcia
Copy link
Contributor

@lyonlai This is a great optimization!

For a specific dispatch cycle we should only be evaluating each getter exactly once. The new implementation of the Evaluator has actually optimized this further and caches each getter by the storeState of all the dependent stores, so as long as the underlying stores haven't changed we get a cache hit.

This PR actually surfaced a bug, where

const currEvaluateResult = fns.evaluate(this.reactorState, getter)

returns a new reactorState containing the cached value for the getter; however, it isn't set on this.reactorState until the very end, meaning getters tied to observers are never cached.

the code should read:

const prevEvaluateResult = fns.evaluate(this.prevReactorState, getter)
const currEvaluateResult = fns.evaluate(this.reactorState, getter)

this.prevReactorState = prevEvaluateResult.reactorState
this.reactorState = currEvaluateResult.reactorState

I will update the RC branch with this optimization. If you could test it and confirm it gives the same results as yours, I will go with that. Otherwise it's worth exploring whether simply bypassing the existing caching mechanism and grouping by getter is more optimal.

Thanks!

@lyonlai
Copy link
Author

lyonlai commented Nov 1, 2015

@jordangarcia, great, it seems you've found and fixed the underlying issue with the caching mechanism inside __notify.

I've pulled down your new changes and tested them in my environment. With your update, yes, the execution time drops to figures similar to the ones I posted.

Then I applied the caching fixes to this PR and tested again in my environment: it's faster than before, and still slightly faster than your branch. There's still around a 20ms difference between the two approaches: __notify with my updated approach averages around 45ms and yours around 65ms (I ran the test a few times in Chrome).

I think it's really worth exploring why the getter-grouping approach is faster than the current one. I've got some thoughts.

Consider an environment with n notifiers and a total of m unique getters required by those notifiers. Normally m <= n.

For the current approach, which evaluates the getter for every notifier, the cost is n evaluates + n notifies.
For the grouping approach, it is m evaluates + n notifies.

It really depends on how big the difference between n and m is, how much a cached evaluate costs on the tablet, and how much it costs to build the map.

In my test case I have n = 67 and m = 13, so it saves 54 evaluate and Immutable.is calls. That's what makes the difference on the tablet device, and I expect the difference to grow with our app, at which point the grouping approach gains even more. This matters because our app targets the tablet platform, which is quite constrained.
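The saving can be spelled out with a throwaway helper (the numbers are the ones from this comment; `evalCounts` is made up for illustration):

```javascript
// n notifiers, m unique getters (m <= n): the current approach evaluates
// once per notifier, the grouped approach once per unique getter.
function evalCounts(n, m) {
  return {
    perObserver: n,  // current approach: n evaluates
    perGetter: m,    // grouped approach: m evaluates
    saved: n - m,    // evaluate + Immutable.is calls avoided
  }
}

evalCounts(67, 13)  // saved: 54, matching the figure above
```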

In the extreme case where n = m, i.e. no getters are shared at all, the grouping approach has no benefit, and it additionally pays the price of grouping the notifiers together.

The grouping approach does not totally bypass the caching mechanism; it still benefits from it.

Say there are three getters, A, B, and C, whose dependency graph looks like this:

A <--
     |<-- C
B <--

A and B both depend on C, and the computation of C is heavy.

And there are observer groups E and F listening for changes from A and B, respectively.

The first pass, computing A, caches the value of C; then when it comes time to compute B, C is fetched from the cache. So the caching is still beneficial in the notifying process.
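A hypothetical memoized evaluator (none of these names are from NuclearJS; this is a stand-in for its Evaluator cache) shows why the heavy getter C is computed only once even though both A and B depend on it:

```javascript
// Sketch of a cache keyed by getter: the first evaluation of a getter is
// computed and stored; later evaluations return the cached value.
function createEvaluator() {
  const cache = new Map()
  return function evaluate(getter) {
    if (!cache.has(getter)) {
      cache.set(getter, getter(evaluate))
    }
    return cache.get(getter)
  }
}

let heavyCalls = 0
const C = () => { heavyCalls += 1; return 21 }  // the heavy shared getter
const A = evaluate => evaluate(C) * 2           // depends on C
const B = evaluate => evaluate(C) + 1           // also depends on C

const evaluate = createEvaluator()
const a = evaluate(A)  // computes C (heavyCalls becomes 1)
const b = evaluate(B)  // reuses the cached C (heavyCalls stays 1)
```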

Thanks.

@lyonlai
Copy link
Author

lyonlai commented Nov 1, 2015

Just on second thought @jordangarcia, the observer grouping can be moved from __notify to addObserver to save even more time in __notify. The algorithm's cost then depends purely on the difference between n and m.

I've got the branch ready. Tests show it saves another 10ms in __notify. I'll open another PR against the RC branch.

@bhamodi
Copy link
Contributor

bhamodi commented Dec 27, 2015

What's the status of this PR @lyonlai @jordangarcia?
