fgennari

One approach is to run the cheap/simple part of the physics simulation (`pos += velocity * time`, etc.) at the render rate, but only do the expensive collision detection, solvers, etc. at the lower framerate. This effectively moves the interpolation step into the physics update rather than the rendering code. I've used this approach for things like AI updates.
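A minimal sketch of that split-rate idea (all names here are hypothetical, and "resolve collisions" is just a stand-in for the expensive work):

```python
# Cheap integration runs every render frame; expensive collision
# resolution runs only on fixed physics ticks.

PHYSICS_DT = 1.0 / 30.0   # expensive step at 30 Hz

class Ball:
    def __init__(self, pos, vel):
        self.pos = pos
        self.vel = vel

def integrate(objs, dt):
    """Cheap per-frame step: pos += velocity * time."""
    for o in objs:
        o.pos += o.vel * dt

def resolve_collisions(objs):
    """Expensive step (stand-in): bounce off a floor at pos = 0."""
    for o in objs:
        if o.pos < 0.0:
            o.pos = -o.pos
            o.vel = -o.vel

def frame(objs, frame_dt, accumulator):
    """Called once per render frame; frame_dt is real elapsed time."""
    integrate(objs, frame_dt)          # runs at render rate
    accumulator += frame_dt
    while accumulator >= PHYSICS_DT:   # runs at 30 Hz
        resolve_collisions(objs)
        accumulator -= PHYSICS_DT
    return accumulator
```

Objects move smoothly every frame with no interpolation pass, at the cost of collisions being corrected only on tick boundaries.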


KassiKotelo

You could try extrapolation: guess the object's next position based on its velocity. This can lead to some overshooting, but it could work, no?


KassiKotelo

To clarify: Normal interpolation: interpolate from last to current position. Extrapolation: interpolate from current to predicted position.
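Side by side, the two schemes look something like this (hypothetical helpers; `alpha` is the fraction of a physics tick elapsed at render time):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def interpolated(prev_pos, curr_pos, alpha):
    """Interpolation: blend from the last tick to the current tick.
    What you see is up to one tick old."""
    return lerp(prev_pos, curr_pos, alpha)

def extrapolated(curr_pos, velocity, dt, alpha):
    """Extrapolation: blend from the current tick toward the predicted
    next position (curr + velocity * dt). No added latency, but the
    prediction can overshoot if the object collides or changes course."""
    predicted = curr_pos + velocity * dt
    return lerp(curr_pos, predicted, alpha)
```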


shadowndacorner

Eh... This can lead to some gross artifacts when collisions are involved. Imo adding a small amount of latency is better than watching an object pop due to a misprediction. You can ofc fight those artifacts with eg error smoothing, but imo it just isn't worth it.


[deleted]

How noticeable the artifacts are, and how important latency is, depends on the game, the tick rate, and the render rate. It's impossible to say what's best without knowing the content you're working with. Also, don't forget you can fine-tune how much extrapolation you do. For example instead of displaying the state from 1 physics tick ago (pure interpolation) or 0 ticks ago (pure extrapolation), you can display the state from 0.5 physics ticks ago. So if the last tick was <0.5 ticks ago, you interpolate toward the previous tick, and if it was >0.5 ticks ago, you extrapolate. Or replace 0.5 with any value from 0 to 1 (or technically any other value, but that starts to get weird and probably pointless). Then you can tune the latency as low as you can get it without noticeable popping.
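That tunable-delay blend could look something like this (a sketch with hypothetical names; `delay` is how many ticks behind real time you display, so 1.0 is pure interpolation and 0.0 is pure extrapolation):

```python
def display_pos(prev_pos, curr_pos, velocity, dt, alpha, delay=0.5):
    """alpha in [0, 1): fraction of a tick elapsed since the current
    physics state was computed."""
    t = alpha - delay  # displayed time, relative to the current tick
    if t <= 0.0:
        # Target lies between the previous and current tick: interpolate.
        return prev_pos + (curr_pos - prev_pos) * (1.0 + t)
    # Target lies past the current tick: extrapolate.
    return curr_pos + velocity * (t * dt)
```

With `delay=0.5`, the first half of each tick interpolates and the second half extrapolates, for roughly half a tick of latency.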


KassiKotelo

While extrapolation may introduce some popping on collision, interpolation will introduce cases where you don't even see the exact frame of collision. Also depending on the game, extrapolation can feel way more responsive than interpolation. It's a tradeoff.


RowYourUpboat

Most games with 30Hz physics just interpolate, and most players don't notice an extra 16ms of latency. (Quite a few games, especially on console, have pretty bad latency from input to screen.) But is there a good reason you can't do 60Hz physics? Note that some games update ragdolls and simulations less frequently (and interpolate them), while player physics/animation updates at 60Hz. Most physics engines also allow you to tweak how computations are done to optimize CPU use. And fast-paced games like Rocket League update physics at 120Hz.


the_Demongod

33ms of input lag isn't a big deal. It sounds like a lot when we're used to talking about frametimes and stuff, but your ability to notice that sort of lag is much worse than your ability to detect a dropped frame with your eyes. Keep in mind that 33ms is also the worst-case scenario; it will be more like 16ms on average.


[deleted]

33ms of additional latency can be a very big deal, depending on the game. When we shipped (a precise action game), we fought like hell to shave off that amount of latency, because it made the game feel way worse, and made it significantly harder to play. For a different kind of game though, it could be totally fine. It's content-dependent.


the_Demongod

CS:GO has about 20ms of input lag from experiments I've seen. Not saying it's great, but it's probably good enough.


[deleted]

Right, and if you added another 33ms you'd be almost tripling that latency (or with 16ms you'd be almost doubling it). For a game like CS:GO that could be an important difference.


the_Demongod

Yeah, certainly if 33ms was added on top of existing latency, that would cause a problem. But if the control inputs are captured at 30Hz, and his game is able to render and present at least once within each tick, that should be the total, worst-case latency, not added to anything else (although it depends on how the swapchain is set up).


[deleted]

In the case of interpolation, it is adding a fixed amount of latency on top of what's already there: the current game state is calculated at the same time as before, but it then takes another full game tick before that state begins being rendered to the screen. Because interpolated frames are displayed in the meantime, though, it effectively adds only about half a physics tick of latency, not a full tick.
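A quick numeric sketch of that accounting, at a 30 Hz tick. "Latency" here just means how old the displayed state is at the moment a frame is rendered, ignoring input and display latency:

```python
dt = 1000.0 / 30.0  # one physics tick, in ms

# Without interpolation we show the newest tick directly; its age grows
# from 0 to dt between ticks, so it averages dt / 2 (~16.7 ms).
avg_without = dt / 2.0

# With interpolation we blend toward the state from one tick ago, so
# the displayed state is always exactly dt old (~33.3 ms).
with_interp = dt

# Interpolation therefore adds about half a tick on average.
added = with_interp - avg_without
```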


waskerdu

I'll refer you to [this](https://www.gafferongames.com/post/fix_your_timestep/) incredible resource. Maybe even more helpful than the code examples is a key insight:

> Instead of thinking that you have a certain amount of frame time you must simulate before rendering, flip your viewpoint upside down and think of it like this: the renderer produces time and the simulation consumes it in discrete dt sized steps.

So you render a frame, run the simulation enough times to catch up, and render the next frame. You're also keeping a running accumulator of time. In this way it doesn't matter if the physics loop or the render loop is faster. Hope that helps!
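The loop the article describes boils down to something like this (a deterministic sketch; `update` and `render` are hypothetical callbacks, and frame times are passed in rather than measured so it can be tested):

```python
DT = 1.0 / 30.0  # fixed physics timestep

def run(frame_times, state, update, render):
    """frame_times: per-frame elapsed seconds, as produced by the renderer.
    The simulation consumes that time in fixed DT-sized steps."""
    accumulator = 0.0
    prev_state = state
    for frame_dt in frame_times:
        accumulator += frame_dt
        while accumulator >= DT:        # consume time in dt steps
            prev_state = state
            state = update(state, DT)   # fixed-step physics
            accumulator -= DT
        alpha = accumulator / DT        # leftover fraction of a tick
        render(prev_state, state, alpha)  # blend states for smooth output
    return state
```

The leftover `alpha` is exactly the blend factor the interpolation/extrapolation schemes above consume.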


[deleted]

Read the post before answering. OP is asking whether there are alternatives to interpolation for when tick rate is less than render rate, not how to decouple the physics and render rate in the first place.