Pass "double seconds_passed" to the acter rendering function? #208

Open
hartwork opened this issue Jan 23, 2023 · 5 comments

Comments

@hartwork
Member

Hi!

In the context of merged pull request #163, I started wondering whether on master we should start passing an additional parameter double seconds_passed to the render function, so that actor plugins can just use that rather than each inventing their own "how long ago was the last frame" stopwatch. What do you think?
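A minimal sketch of the idea, with illustrative names rather than the actual libvisual render signature:

```c
#include <stdio.h>

/* Illustrative actor state; a real plugin would keep this in its
 * private plugin data. */
typedef struct {
    double angle;  /* rotation in degrees */
} ActorState;

/* Hypothetical render hook with the proposed extra parameter. */
static void actor_render (ActorState *state, double seconds_passed)
{
    /* No private stopwatch needed: scale the motion by the delta. */
    state->angle += 90.0 * seconds_passed;  /* 90 deg/s at any frame rate */
}

int main (void)
{
    ActorState state = { 0.0 };
    actor_render (&state, 1.0 / 60.0);  /* one frame at 60 FPS */
    printf ("angle after one frame: %f\n", state.angle);
    return 0;
}
```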

Best, Sebastian

@kaixiong
Member

@hartwork I'm skeptical of how useful that is. I would prefer to pass a 'global clock' time to all actors in a given pipeline.

From my experience, most state update code does not measure time deltas and extrapolate from there, because that only works well if you have well-parameterised state equations. That is not always easy or possible, so most state updates are designed to work in fixed intervals, frame by frame.
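A sketch of that fixed-interval style, with hypothetical names: each render call advances the recurrence by exactly one step, and there is no elapsed-time term for a variable delta to plug into.

```c
#include <stdio.h>

typedef struct {
    double level;  /* e.g. a decaying beat-reaction level */
} ActorState;

/* One fixed step per render call; the 0.9 decay factor implicitly
 * assumes a constant frame interval. */
static void actor_update (ActorState *s, double beat_energy)
{
    s->level = 0.9 * s->level + 0.1 * beat_energy;
}

int main (void)
{
    ActorState s = { 0.0 };
    for (int frame = 0; frame < 5; frame++) {
        actor_update (&s, 1.0);
        printf ("frame %d: level = %f\n", frame, s.level);
    }
    return 0;
}
```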

@hartwork
Member Author

@hartwork I'm skeptical of how useful that is.

At least two actors are doing something like that; I can make a list if you like.

I would prefer to pass a 'global clock' time to all actors in a given pipeline.

Might be nice in addition, but maybe that's something we could provide via a call to an API function rather than passing it in every time. Also, just a clock would still require boilerplate to figure out when you last "looked at the clock".
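A sketch of that boilerplate, assuming a hypothetical lv_pipeline_clock_now() accessor (not an existing libvisual call), stubbed here with POSIX CLOCK_MONOTONIC:

```c
#include <time.h>

/* Stand-in for the hypothetical clock accessor. */
static double lv_pipeline_clock_now (void)
{
    struct timespec ts;
    clock_gettime (CLOCK_MONOTONIC, &ts);
    return (double) ts.tv_sec + (double) ts.tv_nsec / 1e9;
}

typedef struct {
    double last_seen;  /* when did we last "look at the clock"? */
    int    have_last;
    double angle;
} ActorState;

/* Every actor would have to repeat this bookkeeping. */
static void actor_render (ActorState *s)
{
    double now = lv_pipeline_clock_now ();
    double dt  = s->have_last ? now - s->last_seen : 0.0;

    s->last_seen = now;
    s->have_last = 1;

    s->angle += 90.0 * dt;
}

int main (void)
{
    ActorState s = { 0.0, 0, 0.0 };
    actor_render (&s);  /* first call: dt == 0 */
    actor_render (&s);
    return 0;
}
```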

From my experience, most state update code does not measure time deltas and extrapolate from there, because that only works well if you have well-parameterised state equations. That is not always easy or possible, so most state updates are designed to work in fixed intervals, frame by frame.

We can quantify these two buckets and see which one "most" actually falls into, both currently and after adjustment (remembering lv_gltest).

@kaixiong
Member

@hartwork, yes, it would be good to quantify. I would certainly be quite surprised to learn otherwise, because updating state based on a variable interval is always more work than a fixed one.

It would also be good to look at what visualisations outside of LV do.

I don't find calling an API function just to retrieve a time delta ergonomic. Come to think of it, another possibility is to pass the last frame time and current frame time together.
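A sketch of that variant, with hypothetical names; the delta falls out with one subtraction and the absolute frame time stays available:

```c
/* Hypothetical timing info passed into the render call. */
typedef struct {
    double last_frame_time;     /* seconds on the pipeline clock */
    double current_frame_time;  /* seconds on the same clock */
} FrameTiming;

typedef struct {
    double angle;
} ActorState;

static void actor_render (ActorState *state, const FrameTiming *t)
{
    double dt = t->current_frame_time - t->last_frame_time;

    state->angle += 90.0 * dt;  /* the delta is trivially recovered... */
    /* ...and current_frame_time remains available for absolute,
     * clock-parameterised effects. */
}

int main (void)
{
    ActorState s = { 0.0 };
    FrameTiming t = { 1.000, 1.016 };  /* a roughly 16 ms frame */
    actor_render (&s, &t);
    return 0;
}
```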

@hartwork
Member Author

@hartwork, yes, it would be good to quantify. I would certainly be quite surprised to learn otherwise, because updating state based on a variable interval is always more work than a fixed one.

@kaixiong I'm not sure I follow. Anything that rotates and is not bound to move in pixel steps is a good example of something that will work better (as in perceived constant speed) with a time delta.
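A toy comparison over one wall-clock second, illustrating the perceived-speed point: a per-frame step doubles the apparent speed when the frame rate doubles, while a per-second step does not.

```c
#include <stdio.h>

int main (void)
{
    double fixed30 = 0.0, fixed60 = 0.0;
    double delta30 = 0.0, delta60 = 0.0;

    /* Fixed step: degrees per *frame*. */
    for (int f = 0; f < 30; f++) fixed30 += 1.5;
    for (int f = 0; f < 60; f++) fixed60 += 1.5;

    /* Time delta: degrees per *second*, scaled by the frame interval. */
    for (int f = 0; f < 30; f++) delta30 += 90.0 * (1.0 / 30.0);
    for (int f = 0; f < 60; f++) delta60 += 90.0 * (1.0 / 60.0);

    printf ("fixed step: %5.1f deg at 30 FPS, %5.1f deg at 60 FPS\n",
            fixed30, fixed60);
    printf ("time delta: %5.1f deg at 30 FPS, %5.1f deg at 60 FPS\n",
            delta30, delta60);
    return 0;
}
```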
It would also be good to look at what visualisations outside of LV do.

Please go ahead, but I think we already know the answer to that, just not the quantity. I'm not sure if counting who does it "wrong" will help us here.

I don't find calling an API function just to retrieve a time delta ergonomic.

We were talking about access to a clock, not access to a time delta, with regard to the API function. Big difference to me.

Come to think of it, another possibility is to pass the last frame time and current frame time together.

That's good enough for me, as the delta can be extracted trivially.
Bonus points if we can ensure that it's from a monotonic clock.

@kaixiong
Member

@kaixiong I'm not sure I follow. Anything that rotates and is not bound to move in pixel steps is a good example of something that will work better (as in perceived constant speed) with a time delta. It would also be good to look at what visualisations outside of LV do.

I've brought this up elsewhere before - transformations like those you see in lv_gltest are very parameterisable with time - the state equations are relatively trivial to write down. Outside of that, it is not always easy or practical. For example, what do you do with actors that perform image filters, e.g. convolutional effects like blurring the previous output?
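For illustration, a sketch of such a filter: the state is defined by a per-frame recurrence over the previous output, so there is no closed-form state(t) to evaluate at an arbitrary time delta.

```c
#include <stdint.h>
#include <string.h>

/* One step of a 4-neighbour box blur over the previous frame. */
static void blur_feedback (uint8_t *dst, const uint8_t *prev, int w, int h)
{
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int i = y * w + x;
            dst[i] = (uint8_t) ((prev[i - 1] + prev[i + 1] +
                                 prev[i - w] + prev[i + w]) / 4);
        }
    }
}

int main (void)
{
    enum { W = 8, H = 8 };
    uint8_t prev[W * H], next[W * H];

    memset (prev, 0, sizeof prev);
    prev[H / 2 * W + W / 2] = 255;  /* a single bright pixel */
    memset (next, 0, sizeof next);

    /* Each render call applies exactly one step of the recurrence. */
    blur_feedback (next, prev, W, H);
    return 0;
}
```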

Please go ahead, but I think we already know the answer to that, just not the quantity. I'm not sure if counting who does it "wrong" will help us here.

It has nothing to do with correctness per se, but with how expressible changes are in terms of a variable time delta. Which leads back to the question of the split among visualisations in their chosen approaches.

All else being equal, fixed time deltas are just simpler and work with just about anything. If you can work with a variable time delta and still get smooth transitions, it is definitely best to do that.

We were talking about access to a clock, not access to a time delta, with regard to the API function. Big difference to me.

I wasn't thinking of time deltas in particular - I just do not find 'callouts' as ergonomic as receiving the parameters directly. It's also a bit more work when we have complex pipelines involving multiple actors and morphs that need synchronisation. In such scenarios, we have to work with a pipeline-wide clock and the fact that some actors will be called later than others even though they're technically rendering for the same 'frame time' and composited together.
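A sketch of that pipeline-wide clock idea, with hypothetical types: the core samples the clock once per composited frame and hands every actor the same timestamp, however late each render call actually runs.

```c
#include <stddef.h>

typedef struct Actor Actor;
struct Actor {
    void (*render) (Actor *self, double frame_time);
};

/* Hypothetical compositing loop: one clock sample per frame. */
static void pipeline_render (Actor **actors, size_t count, double frame_time)
{
    for (size_t i = 0; i < count; i++) {
        /* Later calls still render for the same 'frame time'. */
        actors[i]->render (actors[i], frame_time);
    }
}

int main (void)
{
    pipeline_render (NULL, 0, 0.0);  /* no actors: loop body never runs */
    return 0;
}
```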

Come to think of it, another possibility is to pass the last frame time and current frame time together.

That's good enough for me, as the delta can be extracted trivially. Bonus points if we can ensure that it's from a monotonic clock.

Cool, this sounds like the way forward.
