All Ready for GDC 2013
I’m speaking Tuesday 10AM at the Physics Tutorial Day. See you at GDC!
Virtual Go Talk Slides from GDC 2013
My talk is finished and I’m very relieved and proud. Thanks for being a great audience!
The slides are available in PDF and Keynote.
Please note that my presentation style is completely visual + ad-lib so there are no slide notes.
For details, please refer to the article series supporting the talk.
Thanks again and see you at GDC 2014!
Encouragement
Hello, my name is Kyle Gagner. I stumbled across your blog looking for information on how to make a physics engine, but I found something much more interesting: the game Go. I was completely unaware of Go until then, and now I’m completely obsessed with it. Impatient as ever, I played my first three games online, and my fourth on a grid drawn on paper with bits of play-dough, with a friend of mine at a science fair. After that, I decided to write my own online version of the game. It’s a convenient way to play, especially since I don’t yet own a Go board, but I am very much anticipating whatever might come of your Virtual Go project, which promises to be orders of magnitude better. So, I suppose I’m simply writing to thank you for introducing me to Go and to express my interest in your project. If there’s anything I can do to help, just let me know.
Thank you so much Kyle. I’m so happy I helped you discover Go. Go is awesome!
Virtual Go iPad Version: Work In Progress!
If you are a Go player with an iPad 2, 3 or 4 and would like to beta test Virtual Go
Click here to join the beta test group
Virtual Go BETA – Work in Progress!
Today I resigned from Sony Santa Monica and accepted a position at Respawn Entertainment
While I’m sad to say goodbye to my friends at Sony, I’m excited to be starting work at Respawn on Titanfall.
A few things interested me at Respawn, beyond the obvious points of an incredibly talented team and a ridiculously kickass game running at 60fps.
Specifically this article by Jon Shiring, network programmer at Respawn:
http://www.respawn.com/news/lets-talk-about-the-xbox-live-cloud/
As I read this article, quite a few things rang true with my experience in game networking. I very quickly came to the conclusion that peer-to-peer simply cannot compete with the matchmaking speed/quality and in-game network performance of dedicated servers. Sure, you can throw money and time at it, but ultimately you run into limiting factors such as poor internet connections, varying network conditions and NAT traversal time. And even if you could solve all that, even if everybody had fiber optic connections and open NATs, you’d still run into cheating issues and lag switches, and waste a lot of time handling host migration.
Wouldn’t it be nice if, instead of dealing with all this crap, you could just focus on making a really awesome game? If your game had excellent matchmaking speed and quality because it didn’t need to wait around running QoS queries to match players together? If you didn’t have to worry about host migration because the host was always there? Even better, what if you could perform all the physics and AI on the server, so each player’s console is free to focus on rendering and making things look amazing, and no single console is overloaded with the extra CPU cost of acting as the server?
I found myself in the unique position of a P2P networking expert seeing much of my work over the last few years obsoleted. I also came to an understanding of precisely how revolutionary dedicated cloud servers will be for competitive multiplayer gaming on consoles.
So when you find yourself with an opportunity to join such a team, you take it.

I won’t be very active here for the next year.
See you guys on the other side of Titanfall.
Glenn Fiedler
glenn.fiedler@gmail.com
Titanfall Released on X1, PC and 360. I’m Back!
Titanfall has successfully launched on X1, PC and 360.
Reviews are good and it’s an honor to be a part of such a talented team.
Protected: Networked Physics
Introduction to Networked Physics
Hi, I’m Glenn Fiedler and welcome to my new article series: Networked Physics.
I’ve been doing research into different approaches for networking physics simulations since my first article Networked Physics in 2006. Almost 10 years later I’m proud to present what I hope will become the definitive resource for game developers who need to network a physics simulation.
My approach in this article series will not be to present just one technique and declare it “the best way”. If there is one thing I have learned in the past ten years, it is that there is no such thing! Instead, I will present a number of alternative techniques with their pros and cons so YOU can decide which technique is best for your game.
So if you have a physics simulation you need to network and you don’t know where to start, start here! This article series is designed for you.
First we’ll quickly cover networking basics: what you can expect from the internet when you network your physics simulation. I’ll keep it light and quickly explain why you should use UDP not TCP, how to handle packet loss and out of order packets, what sort of latency and jitter you can expect, what packet send rate to use, and how much bandwidth is available.
Next I’ll introduce the physics simulation we’ll use for this article series: a simulation where the player controls a cube and can roll around, pushing and blowing other smaller cubes around the world. The player can even roll around and form a physically simulated katamari out of cubes!
Using this demo we will explore three different techniques for synchronizing a physics simulation:
- Deterministic lockstep
- Snapshots and interpolation
- Stateful synchronization
Various bandwidth optimization techniques will be explored, including compression of position, orientation, and linear and angular velocity, delta encoding, and priority accumulators, taking each case study from toy implementation to something ready for real-world use over the internet.
Next the question of topology is addressed. Should your physics simulation be networked client/server or peer-to-peer? What are the pros and cons of each approach? Issues such as cheating, NAT traversal, bandwidth usage and host migration are explored in detail.
Rounding out the article series we explore eight different network model implementations:
- Client/Server (Interpolation)
- Client/Server (Client side prediction)
- Deterministic Lockstep (Peer-to-peer)
- Deterministic Lockstep (Client/Server)
- Deterministic Lockstep (Client side prediction)
- Distributed Simulation (Peer-to-peer)
- Distributed Simulation (Client/server)
- Distributed Simulation (Non-simulating server)
Now that’s a whole lot to cover, so let’s get started!
First up: Networking 101.
If you enjoyed this article please consider making a small donation. Donations encourage me to write more articles!
Networking 101
Hi, I’m Glenn Fiedler and welcome to Networking 101.
The first thing you need when networking a physics simulation is a basic understanding of how the internet works and what sort of behavior you can expect when you try to send packets over it. This article will provide that quick overview, drawing from my past 10 years shipping online games (Tribes: Vengeance, Mercenaries 2, Journey, Playstation: All-Stars, God of War: Ascension and most recently, Titanfall).
I’ll be brief here so as not to get bogged down in details. Really, all you need to know about the internet is that at the lowest level it works by sending packets from one computer to another, but not directly: your packets hop from computer to computer along a route in order to reach their destination.
The important thing to understand is that if any computer along the route your packets take experiences congestion, it may drop a packet, or, more likely, buffer your packet for a long time trying to ensure it is delivered.
Generally this happens only when you try to exceed the capacity of the route. At other times it just happens randomly, but there is nothing you can do about that, except to show a “bad connection” icon in the top-right corner of your screen (highly recommended).
So how do you best work within the capabilities of this route, which you are not in control of?
The most important thing you can do is only send time critical data over UDP.
The reason for this is that UDP is the lowest-level internet protocol, working as a very thin layer on top of IP. By using UDP you gain access to the way the internet really works: packet-based, unreliable, unordered delivery.
I recommend you don’t use TCP for time critical data. Of course, feel free to use it where appropriate (eg. REST calls when interfacing with HTTP servers and so on), but when sending data which is time critical (basically all your data when networking a physics simulation…), you should be aware that the abstraction TCP provides gets in the way in the presence of packet loss and latency.
Why is this so? Under packet loss, TCP buffers the more recent packets it receives until the lost packet is resent. It does this so it can present the data reliably and in-order to the receiver. This means that even if more recent data has arrived, TCP has to stop and wait for the lost data to be resent.
This is the whole point of TCP, creating an abstraction of reliable-ordered data delivery. But this is exactly what you don’t want for a time critical protocol. If your data is time critical and a packet is lost, so what! It’s dropped. You don’t have time to stop and wait for that packet to be resent. You want to skip over that lost packet and process the next packet that arrives. This is what UDP lets you do and this is why you want to use UDP.
Importantly, just so you don’t think I’m going too far with the UDP thing: if there is no packet loss, then TCP is just fine. If you are just starting out developing your own network protocol over a LAN, you can use TCP there if you want; practically, it won’t make any difference (there is virtually no packet loss over a LAN, and no latency).
Just be aware that once packet loss AND latency exist over your route, you’re going to want to convert to UDP to get the best network performance for your time critical data. For more details please refer to my article: UDP vs. TCP
Moving forward, here are the absolute basics of IP (Internet Protocol), which correspond directly to how UDP works:
- The internet is packet based, and packet switched (packets hop along a route)
- The internet is unreliable (packet loss may occur, packets may arrive out of order or even arrive in duplicate)
- The internet promises best effort delivery only
What this means, in theory, is that when sending packets over the internet you have absolutely no guarantee. You could have 100% packet loss, or packets could be delivered in a completely random order. It’s easy to get hung up on worst cases here, and they can make you incredibly paranoid as a network programmer.
The key thing is to understand that such behavior is incredibly unlikely and if it does happen, really it’s not your fault or responsibility to handle completely messed up network conditions (again, bad connection icon, top-right corner). All you can do is make sure you’re not the cause of them.
In practice what I’ve discovered is that typically (in 2014):
- Packet loss along the internet backbone is practically non-existent
- When interfacing with home internet connections, packet loss up to 5% is common
- When that user is playing over Wi-Fi you can get bursts of late packets or no packets delivered and higher packet loss overall (interference)
- I’m not even going to talk about how bad packet loss, latency and so on is over 3G/4G wireless. My advice is don’t bother!
- Home internet connections in the US typically have decent download (you can rely on around 256kbit/sec), but are upload constrained (I would aim for no more than 128kbit/sec up)
- If you exceed the available bandwidth for a link, very strange things start to happen. Rather than just dropping packets, routers attempt to buffer your flooding packets and deliver them all, jacking up your latency to several seconds. It would be really nice if we had a way to signal to routers not to do this, but I’m not aware of one existing with IPv4.
- Time variance in packet delivery does exist (jitter). For example, if you send packets 60 times per-second, you aren’t going to receive each packet spaced exactly 1/60th of a second apart on the other end. But it’s typically on the order of a few frames of jitter @ 60fps. You do need to be aware of it, but don’t be too concerned.
- On to packet send rate. Many games send packets at 10pps or 20pps. I’ve shipped games with low packet send rates, but I’ve also shipped games that send packets 60 times per-second. There is no problem having a high packet send rate if it is appropriate for your network model. Higher packet send rates can improve responsiveness, but they also increase the percentage of your bandwidth spent on packet header overhead (the rule of thumb I use is 32 bytes of overhead per-packet in the real world; see the worked example after this list)
- A lower quality network connection is often overloaded precisely when somebody is trying to play your game, eg. somebody else in the house watching Netflix while they play. It pays to be as conservative in bandwidth usage as possible, because you’re usually not the only application sending traffic over the home link.
- Generally it pays to keep packets under a 1200 byte MTU. These days this may be a bit overly conservative, but it is common for game engines to split up packets larger than this and perform their own fragmentation and reassembly on the receiving side. Also, some platforms, especially game consoles, have limits on the maximum supported packet size, although with IPv6 and newer consoles this is less of a concern going forward than it has been in the past.
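To make the packet header overhead concrete, here’s the arithmetic behind that rule of thumb: at 60 packets per-second, 32 bytes of per-packet overhead costs 60 * 32 = 1920 bytes/sec, roughly 15kbit/sec of your budget before you send a single byte of game data. At 10 packets per-second the same overhead is only 320 bytes/sec, about 2.5kbit/sec. Against a 128kbit/sec upstream budget, that difference matters.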
You may be wondering how to handle packet loss and detect such things over UDP. There is really no amazing trick to it. All you do when sending packets is include a sequence number at the top of your packet data, uniquely identifying that packet to the receiver. For example: packet 0, 1, 2, 3 and so on. Building a simple reliability scheme on these sequence numbers is not too hard; all you are really doing is communicating back to the sender which packet sequence numbers were received.
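To make this concrete, here’s a minimal sketch of what such a packet header could look like. The struct layout and the 32-packet ack window are my own illustrative choices, not a prescribed format:

#include <stdint.h>

// Hypothetical packet header for a simple reliability scheme over UDP.
struct PacketHeader
{
    uint16_t sequence;   // uniquely identifies this packet to the receiver
    uint16_t ack;        // most recent sequence number received from the other side
    uint32_t ackBits;    // bit n set => packet with sequence (ack - 1 - n) was also received
};

Each side stamps outgoing packets with an increasing sequence number and piggybacks acks for received packets onto its own outgoing traffic.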
The details of this are specific to the networking technique you are using so I won’t drill too far down into reliability at this point. If you are interested in a general purpose reliability over UDP technique, please read these articles: Virtual Connection over UDP and Reliability and Flow Control.
One final note about packet delivery and routes. It should be made clear that the route packets take to get from A->B is not necessarily static. In fact, if you are wondering what a major source of duplicate and out of order packets is over IP, it is exactly this, a dynamic route change while you are sending a bunch of packets. The internet is dynamic, not static!
Also, there is no guarantee that the route packets take from B->A is the route from A->B in reverse. You can’t therefore assume that packet round trip time is exactly symmetrical: A->B could take longer than B->A or vice versa.
That’s it for Networking 101. You are now officially qualified to network a physics simulation!
Up next: The Physics Simulation
If you enjoyed this article please consider making a small donation. Donations encourage me to write more articles!
Deterministic Lockstep
Hi, I’m Glenn Fiedler and welcome to Networked Physics, my article series on how to network a physics simulation.
In the previous article, we discussed the properties of the physics simulation we’re going to network. In this article we’ll network that physics simulation using the deterministic lockstep technique.
Deterministic lockstep is a method of synchronizing a system from one computer to another by sending only the inputs that control that simulation, rather than networking the state of the objects in the simulation itself. The idea is that given initial state S(n) we run the simulation using input I(n) to get S(n+1). We then take S(n+1) and input I(n+1) to get S(n+2), repeating this process for n+3, n+4 and so on. It’s sort of like a mathematical induction where we step forward with only the input and the previous simulation state – keeping the state perfectly in sync without ever actually sending it.
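In code, the induction is almost embarrassingly simple. Here’s a sketch, where State, Input and Step are placeholders for whatever your simulation provides:

// Deterministic lockstep: only inputs cross the network. Both machines
// start from the same initial state and apply the same input for each
// frame n, so their simulation states stay identical without ever being sent.
State state = InitialState();
for ( uint32_t frame = 0; ; ++frame )
{
    Input input = GetInputForFrame( frame );   // sampled locally, or received from the network
    state = Step( state, input );              // S(n+1) from S(n) and I(n)
}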
The main benefit of this network model is that the bandwidth required to transmit the input is independent of the number of objects in the simulation. You can network a physics simulation of one million objects with exactly the same amount of bandwidth as a simulation with just one. It’s easy to see that with the state of physics objects typically consisting of a position, orientation, linear and angular velocity (52 bytes uncompressed, assuming a quaternion for orientation and vec3 for everything else) that this can be an attractive option when you have a large number of physics objects.
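For reference, here’s where that 52 byte figure comes from, assuming 32 bit floats:

struct vec3 { float x, y, z; };        // 12 bytes
struct quat { float x, y, z, w; };     // 16 bytes

struct RigidBodyState                  // 52 bytes total, uncompressed
{
    vec3 position;                     // 12 bytes
    quat orientation;                  // 16 bytes
    vec3 linearVelocity;               // 12 bytes
    vec3 angularVelocity;              // 12 bytes
};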
To network your physics simulation using deterministic lockstep you first need to ensure that your simulation is deterministic. Determinism in this context has very little to do with free will. It simply means that given the same initial condition and the same set of inputs, your simulation gives exactly the same result. And I do mean exactly the same result. Not merely close within floating point tolerance. Exact down to the bit-level. So exact that you could take a checksum of your entire physics state at the end of each frame and it would be identical.
Above you can see a simulation that is almost deterministic but not quite. The simulation on the left is controlled by the player. The simulation on the right has exactly the same inputs applied with a two second delay starting from the same initial condition. Both simulations step forward with the same delta time (a necessary precondition to ensure exactly the same result) and apply the same inputs before each frame. Notice how after the smallest divergence the simulation gets further and further out of sync. This simulation is non-deterministic.
What’s going on above is that the physics engine I’m using (ODE) uses a random number generator inside its solver to randomize the order of constraint processing to improve stability. It’s open source. Take a look and see! Unfortunately this breaks determinism because the simulation on the left processes constraints in a different order to the simulation on the right, leading to slightly different results.
Luckily, all that is required to make ODE deterministic on the same machine, with the same compiled binary and on the same OS (is that enough qualifications?) is to set its internal random seed to the current frame number before running the simulation via dSetRandomSeed. Once this is done, ODE gives exactly the same result and the left and right simulations stay in sync.
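In code the fix is tiny. Here’s a sketch of one simulation step with the seed set each frame, using the seeding call named above together with ODE’s quick step solver (the 1/60 second delta time comes from this series’ simulation):

// Seed ODE's internal RNG with the frame number before stepping so that
// both the left and right simulations randomize constraint order identically.
void SimulateFrame( dWorldID world, dJointGroupID contactGroup, uint32_t frame )
{
    dSetRandomSeed( frame );                  // same seed for frame n on both machines
    dWorldQuickStep( world, 1.0f / 60.0f );   // fixed delta time: also required for determinism
    dJointGroupEmpty( contactGroup );         // clear contact joints for the next frame
}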
And now a word of warning. Even though the ODE simulation above is deterministic on the same machine, that does not necessarily mean it would also be deterministic across different compilers, a different OS or different machine architectures (eg. PowerPC vs. Intel). In fact, it’s probably not even deterministic between your debug and release build due to floating point optimizations. Floating point determinism is a complicated subject and there is no silver bullet. For more information please refer to this article.
Now let’s talk about implementation.
You may wonder what the input in our example simulation is and how we should network it. Well, our example physics simulation is driven by keyboard input: arrow keys apply forces to make the player cube move, holding space lifts the cube up and blows other cubes around, and holding ‘z’ enables katamari mode.
But how can we network these inputs? Must we send the entire state of the keyboard? Do we send events when these keys are pressed and released? No. It’s not necessary to send the entire keyboard state, only the state of the keys that affect the simulation. What about key press and release events then? No. This is also not a good strategy. We need to ensure that exactly the same input is applied on the right side when simulating frame n, so we can’t just send ‘key pressed’ and ‘key released’ events across using TCP, as they may be applied earlier or later than frame n, causing the simulations to diverge.
What we do instead is represent the input with a struct and at the beginning of each simulation frame on the left side, sample this struct from the keyboard and stash it away in a sliding window so we can access the input later on indexed by frame number.
struct Input
{
    bool left;
    bool right;
    bool up;
    bool down;
    bool space;
    bool z;
};
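And here’s a sketch of the sliding window the inputs get stashed in. A ring buffer indexed by frame number is the simplest thing that works; the size is an illustrative choice that just needs to exceed the maximum number of inputs in flight:

const int MaxInputs = 1024;   // must exceed the max number of un-acked inputs

struct InputSlidingWindow
{
    Input inputs[MaxInputs];

    void Add( uint32_t frame, const Input & input )
    {
        inputs[ frame % MaxInputs ] = input;   // newer frames overwrite long-acked ones
    }

    const Input & Get( uint32_t frame ) const
    {
        return inputs[ frame % MaxInputs ];
    }
};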
Now we send that input from the left simulation to the right simulation in a way such that the simulation on the right side knows that the input belongs to frame n. For example, if you were sending across using TCP you could simply send the inputs and nothing else, and the order of the inputs implies n. On the other side you could read the packets coming in, process the inputs and apply them to the simulation. I don’t recommend this approach, but let’s start here and I’ll show you how it can be made better.

So let’s say you’re using TCP, you’ve disabled Nagle’s algorithm, and you’re sending inputs from the left to the right simulation once per-frame (60 times per-second).
Here it gets a little complicated. It’s not enough to just run the simulation on inputs as they arrive over the network, because the result would be very jittery. You can’t send data across the network at a certain rate and expect it to arrive nicely spaced out at exactly the same rate on the other side (eg. 1/60th of a second apart). The internet doesn’t work like that. It makes no such guarantee.
If you want this you have to implement something called a playout delay buffer. Unfortunately, the subject of playout delay buffers is a patent minefield. I would not advise searching for “playout delay buffer” or “adaptive playout delay” while at work. But in short, what you want to do is buffer packets for a short amount of time so they appear to be arriving at a steady rate even though in reality they arrive somewhat jittered.
What you’re doing here is similar to what Netflix does when you stream a video. You pause a little bit initially so you have a buffer in case some packets arrive late and then once the delay has elapsed video frames are presented spaced the correct time apart. Of course if your buffer isn’t large enough then the video playback will be hitchy. With deterministic lockstep your simulation will behave exactly the same way. I recommend 100-250ms playout delay. In the examples below I use 100ms because I want to minimize latency added for responsiveness.
My playout delay buffer implementation is really simple. You add inputs to it indexed by frame, and when the very first input is received, it stores the current local time on the receiver machine and from that point on delivers all inputs assuming that frame 0 starts at that time + 100ms. You’ll likely need to do something more complex for a real world situation, perhaps something that handles clock drift and detects when the simulation should slightly speed up or slow down to maintain a safe amount of buffering (being “adaptive”) while minimizing overall latency. But that is reasonably complicated, probably worth an article in itself, and as mentioned a bit of a patent minefield, so I’ll leave it up to you.
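Here’s a sketch of that simple version, building on the Input struct above. No clock drift handling, no adaptation, and the names and layout are illustrative:

struct PlayoutDelayBuffer
{
    static const int MaxInputs = 1024;
    static constexpr double PlayoutDelay = 0.1;       // 100ms
    static constexpr double FrameTime = 1.0 / 60.0;

    bool   receivedFirstInput = false;
    double startTime = 0.0;
    Input  inputs[MaxInputs];
    bool   received[MaxInputs] = {};

    void AddInput( double time, uint32_t frame, const Input & input )
    {
        if ( !receivedFirstInput )
        {
            receivedFirstInput = true;
            startTime = time;                         // frame 0 plays at startTime + 100ms
        }
        inputs[ frame % MaxInputs ] = input;
        received[ frame % MaxInputs ] = true;
    }

    // Returns true once the playout time for 'frame' has arrived AND its
    // input is here. If the time has arrived but the input hasn't, the
    // caller has no choice but to wait (this is the hitch described below).
    bool GetInput( double time, uint32_t frame, Input & input )
    {
        if ( !receivedFirstInput || !received[ frame % MaxInputs ] )
            return false;
        if ( time < startTime + PlayoutDelay + frame * FrameTime )
            return false;
        input = inputs[ frame % MaxInputs ];
        received[ frame % MaxInputs ] = false;        // consume
        return true;
    }
};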
In average conditions the playout delay buffer provides a steady stream of inputs for frame n, n+1, n+2 and so on, nicely spaced 1/60th of a second apart with no drama. In the worst case, the time for frame n arrives but its input hasn’t: the buffer returns nothing and the simulation is forced to wait. If packets get bunched up and delivered late, it’s possible to have multiple inputs ready to dequeue per-frame. In this case I limit the simulation to 4 simulated frames per-render frame so it has a chance to catch up. If you set this much higher you may induce further hitching, as you take longer than 1/60th of a second to run those frames (this can create an unfortunate feedback effect). In general, it’s important to make sure you are not CPU bound while using the deterministic lockstep technique, otherwise you’ll have trouble running the extra simulation frames needed to catch up.
Using this playout buffer strategy and sending inputs across TCP, we can easily ensure that all inputs arrive reliably and in-order. This is what TCP is designed to do, after all. In fact, pundits out there on the internet commonly claim that if you need reliable-ordered data, you should just use TCP, because that’s exactly what it was built for.

I’m here to tell you this kind of thinking is dead wrong.
Above you can see the simulation networked using deterministic lockstep over TCP at 100ms latency and 1% packet loss. If you look closely on the right side you can see infrequent hitching every few seconds. (I apologize if you see hitching on both sides; that means your computer is struggling to play the video. Maybe download it and watch it offline in that case.) What is happening here is that when a packet is lost, TCP has to wait roughly RTT*2 before resending it (actually it can be much worse, but I’m being generous…). The hitches happen because with deterministic lockstep the right simulation can’t simulate frame n without input n, so it has to pause and wait for input n to be resent!
That’s not all. It gets significantly worse as the amount of latency and packet loss increases. Here is the same simulation networked using deterministic lockstep over TCP at 250ms latency and 5% packet loss:
Now I will concede that if you have no packet loss and/or a very small amount of latency then you very well may get acceptable results with TCP. But please be aware that if you use TCP to send time critical data it degrades terribly as packet loss and latency increases.
Can we do better? Can we beat TCP at its own game: reliable-ordered delivery?
The answer is an emphatic YES. But only if we change the rules of the game.
Here’s the trick. We need to ensure that all inputs arrive reliably and in order. But if we just send inputs in UDP packets, some of those packets will be lost. What if, instead of detecting packet loss after the fact and resending lost packets, we just redundantly send all inputs we have stored until we know for sure that the other side has received them?
Inputs are very small (6 bits). Let’s say we’re sending 60 inputs per-second (60fps simulation), and we know round trip time is going to be somewhere in the 30-250ms range. Let’s say, just for fun, that it could be up to 2 seconds worst case, at which point we time out the connection (screw that guy). This means that on average we only need to include between 2-15 frames of input, and worst case we’ll need 120 inputs. Worst case is 120*6 = 720 bits. That’s only 90 bytes of input! That’s totally reasonable.
We can do even better. It’s not common for inputs to change every frame. What if, instead, each packet starts with the sequence number of the most recent input, the number of un-acked inputs, and the 6 bits of the first (oldest) input? Then, as we iterate across the remaining inputs to write them to the packet, we write a single bit (1) if the next input is different from the previous, and (0) if it is the same. So if an input differs from the previous frame we write 7 bits (rare). If it is identical we write just one (common). Where inputs change infrequently this is a big win, and even the worst case isn’t that bad: 120 bits of extra data sent, just 15 bytes of overhead.
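Sketched in code, writing the input portion of a packet under this scheme might look like the following. The BitWriter is hypothetical, standing in for whatever bitpacker your engine uses, and the inputs array is assumed ordered oldest to newest with at least one entry:

// Pack the six input bools into 6 bits.
uint8_t PackInput( const Input & input )
{
    return (uint8_t) ( input.left        |
                       input.right << 1  |
                       input.up    << 2  |
                       input.down  << 3  |
                       input.space << 4  |
                       input.z     << 5 );
}

// Write all un-acked inputs into a packet. Identical consecutive
// inputs cost 1 bit; changed inputs cost 7 bits.
void WriteInputs( BitWriter & writer, uint16_t newestSequence,
                  const Input * inputs, int numInputs )
{
    writer.WriteBits( newestSequence, 16 );    // sequence number of the most recent input
    writer.WriteBits( numInputs, 8 );          // number of un-acked inputs that follow
    uint8_t previous = PackInput( inputs[0] );
    writer.WriteBits( previous, 6 );           // oldest input, written in full
    for ( int i = 1; i < numInputs; ++i )
    {
        const uint8_t current = PackInput( inputs[i] );
        if ( current == previous )
        {
            writer.WriteBits( 0, 1 );          // identical to previous: 1 bit (common)
        }
        else
        {
            writer.WriteBits( 1, 1 );          // changed: 1 bit flag + 6 bits (rare)
            writer.WriteBits( current, 6 );
            previous = current;
        }
    }
}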
Of course, another packet is required from the right simulation to the left so the left side knows which inputs have been received. Each frame the right simulation reads input packets from the network before adding them to the playout delay buffer, and keeps track of the most recent input it has received by frame number (or, if you want to get fancy, with a 16 bit sequence number that handles wrapping). Then, after all input packets are processed, if any input was received that frame, the right simulation replies to the left simulation with the most recent input sequence number it has received: an “ack” or acknowledgment.
When the left simulation receives this ack it takes its sliding window of inputs and discards any inputs older than the acked sequence number. There is no need to send these inputs to the right simulation anymore because we know it has already received them. This way we typically have only a small number of inputs in flight proportional to the round trip time between the two simulations.
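The discard step on the left side is then just a few lines. Here’s a sketch assuming the sliding window is a deque of (sequence, input) pairs; the wrapping compare handles 16 bit sequence numbers rolling over:

#include <cstdint>
#include <deque>
#include <utility>

// True if sequence s1 is older than or equal to s2, with wrap-around.
bool SequenceLessThanOrEqual( uint16_t s1, uint16_t s2 )
{
    return s1 == s2 ||
           ( ( s2 > s1 ) && ( s2 - s1 <= 32768 ) ) ||
           ( ( s1 > s2 ) && ( s1 - s2 >  32768 ) );
}

std::deque< std::pair<uint16_t, Input> > unackedInputs;   // oldest at the front

// On receiving an ack, drop every input the right simulation already has.
void ProcessAck( uint16_t ackSequence )
{
    while ( !unackedInputs.empty() &&
            SequenceLessThanOrEqual( unackedInputs.front().first, ackSequence ) )
    {
        unackedInputs.pop_front();
    }
}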
We have beaten TCP by changing the rules of the game. Instead of implementing “95% of TCP on top of UDP”, we have implemented something quite different and better suited to our requirements: time critical data. We developed a custom protocol that redundantly sends all un-acked inputs, one that handles large amounts of latency and packet loss without degrading the quality of the synchronization or hitching.
So exactly how much better is this approach than sending the data over TCP?
Take a look.
The video above is deterministic lockstep synchronized over UDP using this technique with 2 seconds of latency and 25% packet loss. In fact, if I just increase the playout delay buffer from 100ms to 250ms I can get the code running smoothly at 50% packet loss. Imagine how awful TCP would look in these conditions!
So in conclusion: even where TCP should have the most advantage, in the only networking model I’ll present to you in this article series that relies on reliable-ordered data, we can easily whip its ass with a simple custom protocol sent over UDP.
Up next: Snapshots and Interpolation (STATUS: Pending Donations!)
Networked Physics article series resumes once my hosting costs for 2015 are covered!
If you are enjoying the networked physics article series and would like to read more, please consider supporting my work with a small patreon donation.
Protected: Snapshots and Interpolation
Protected: Stateful Synchronization
The Physics Simulation
Hi, I’m Glenn Fiedler and welcome to Networked Physics, my article series on how to network a physics simulation.
In the previous article Networking 101, we discussed the basics of game networking. In this article we’ll spend some time exploring the physics simulation we’re going to network in many different ways for the rest of this series.
First we need an object controlled by the player. Here I’ve set up a simple simulation of a cube in the open source physics engine ODE. The player can use the arrow keys to make the cube move around by applying force at its center of mass. The physics simulation takes this linear motion and calculates friction as the cube collides with the ground, inducing the rolling and tumbling motion.
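A sketch of that input handling with ODE’s API (the force magnitude and axis conventions are my own illustrative choices):

// Apply a force at the player cube's center of mass from the arrow keys.
void ApplyPlayerForces( dBodyID playerBody, bool left, bool right, bool up, bool down )
{
    const dReal force = 100.0;                             // tuning value
    if ( left )  dBodyAddForce( playerBody, -force, 0, 0 );
    if ( right ) dBodyAddForce( playerBody, +force, 0, 0 );
    if ( up )    dBodyAddForce( playerBody, 0, 0, -force );
    if ( down )  dBodyAddForce( playerBody, 0, 0, +force );
}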
This tumbling is quite intentional and is why I chose a cube for this simulation instead of, say, a sphere. Rigid bodies in general move in non-linear ways according to their shape and how they respond to collision, friction and so on. It’s not possible to accurately predict the motion of this tumbling cube using a simple linear extrapolation or even the ballistic equations of motion. You have to run the whole physics simulation including collision response, friction and so on to determine how a rigid body moves.
This automatically rules out a few networking approaches such as extrapolation aka. dead reckoning. These approaches are typically used when predicting mostly linear motion such as the character proxy in first person shooters, fast moving airplanes or vehicles in military sims. Since this non-linear motion is such an integral part of networking a physics simulation, it’s important to get this out of the way at the start, hence cubes.
Now. Networking a physics simulation is just too easy if there is only one object interacting with a static world. It starts to get interesting only when the player controls an object that interacts with other physically simulated objects, especially if those objects push back and affect the motion of the player!
So here we have a grid of 900 small cubes in the simulation and one larger player cube. Notice that when the player interacts with a cube it turns red. Also, when a non-player cube comes to rest, it returns to grey (non-interacting). Observe that interactions aren’t just direct: if a cube is red, it turns the other cubes it interacts with red as well. This way player interactions fan out recursively, covering all affected objects.
This may look like just a nice visual effect, but in my experience tracking the set of objects the player is interacting with is the key to latency-free interaction between player and non-player objects. Also, assigning player ownership over objects is a good strategy to resolve conflicts when multiple players try to interact with the same object. I’ll talk more about this later.
OK. So it’s cool to be able to roll and interact with other objects as I collide with them, but I want something more dramatic. Some way that the player can push other objects around. What I came up with is this neat little mechanic:
To implement this I raycast to find the intersection point with the ground below the center of mass of the player cube, then apply a spring force (Hooke’s law) to the player cube such that it floats in the air at a certain height above this point. Then, all non-player cubes within a certain distance of that intersection point have a force applied proportional in direction and magnitude to their distance from that point, pushing them away from it.
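Here’s a sketch of the spring part, assuming a y-up world and with illustrative constants. A real implementation would also add damping against the body’s vertical velocity to stop the spring oscillating:

// Hooke's law: push the player cube toward its target hover height above
// the raycast intersection point with the ground.
void ApplyHoverForce( dBodyID playerBody, dReal groundHeight )
{
    const dReal targetHeight = 2.0;                        // desired float height (tuning value)
    const dReal stiffness = 50.0;                          // spring constant k (tuning value)
    const dReal * position = dBodyGetPosition( playerBody );
    const dReal height = position[1] - groundHeight;       // current height above the ground
    const dReal springForce = -stiffness * ( height - targetHeight );
    dBodyAddForce( playerBody, 0, springForce, 0 );
}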
That’s not all. I also wanted a very complex, coupled motion between the player and non-player cubes. Something where the player interacts with lots of other physics objects very closely, such that the player and the objects it’s interacting with effectively become a single system, eg. a group of rigid bodies under constraints.
To implement this I thought it would be very cute if the player could roll around and create a ball of cubes, like in one of my favorite games, Katamari Damacy.
To implement this effect, cubes within a certain distance of the player have a force applied toward the center of the player cube. Note that these cubes remain physically simulated while in the ball; they are not just “stuck” to the player as in the original game. This means the cubes in the ball are continually pushing against and interacting with each other and the player cube via collision constraints. You can see really cool effects, like the ball moving slower as it gains more cubes and mass, and cubes getting thrown out of the ball if you roll too rapidly. It’s really fun to move around like this, and it’s also a very complex situation for networked physics, which is why I chose it!
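A sketch of the attraction force, again with illustrative constants and falloff (the text above doesn’t specify the exact force curve):

// Pull each nearby cube toward the player cube's center so a ball of
// physically simulated cubes forms around the player.
void ApplyKatamariForce( dBodyID cube, const dReal * playerCenter )
{
    const dReal radius = 4.0;         // only cubes within this distance are attracted
    const dReal strength = 20.0;      // tuning value
    const dReal * p = dBodyGetPosition( cube );
    const dReal dx = playerCenter[0] - p[0];
    const dReal dy = playerCenter[1] - p[1];
    const dReal dz = playerCenter[2] - p[2];
    const dReal distance = dSqrt( dx*dx + dy*dy + dz*dz );
    if ( distance > 0 && distance < radius )
    {
        // constant-magnitude force directed at the player cube's center
        dBodyAddForce( cube, dx / distance * strength,
                             dy / distance * strength,
                             dz / distance * strength );
    }
}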
That’s it for our exploration of the physics simulation.
In the next few articles we will network this simulation in different ways and discuss pros and cons of each approach.
Up first: Deterministic Lockstep
If you enjoyed this article please consider making a small donation. Donations encourage me to write more articles!
Happy Holidays!
Please support Gaffer on Games!
Hi everybody, I’m Glenn Fiedler the author of Gaffer on Games.
If you have enjoyed the articles on this site over the past ten years, please consider supporting gafferongames.com with a small patreon donation. It takes only a small amount of money to host this website, but a whole truckload of effort to research and write the articles that make this site worth reading. Many of these articles are only possible because I spend large portions of my spare time, holidays, and time off between jobs researching and developing techniques to write about here.
My aim on this site is to always write clearly and explain game networking concepts that you won’t read about anywhere else. If you have read any of my game networking articles over the past 10 years, I’m sure you’ll agree: you’ll find quality game networking information here that simply isn’t available anywhere else. Many people have learned game network programming from the articles posted on this website, and that is something I’m very proud of. Well done!
Lately I’ve started including videos with my articles to explain concepts. It costs money to host these videos at high quality, and a great deal of work and effort to research and develop the game networking techniques I write about on this website. You can trust that I only ever write about things I have actually implemented, and that implementation takes a LOT of time. This is why my articles are so full of concrete examples vs. theoretical explanation, but it’s also why it’s so much work to write articles for this website!
If you have enjoyed the articles on this website, please consider showing your support for my work by making a small patreon donation.
Networked Physics article series resumes once my hosting costs for 2015 are covered!
Wishing all my readers a happy new year and a great 2015. I hope to see you with more articles in the new year!
Protected: Snapshot Compression
Protected: Conclusion
Networking for Physics Programmers (GDC 2015)
Thanks to everybody who attended my talk today. I had a great time presenting for you.
The final slides for my talk are available here (850MB Keynote file with HD video).
I have also written an article series on this subject: Networked Physics