Grand Prix Garage - GPL
The workings of GPL Online - Part 2
Disappearing cars, violent collisions, latency, and bandwidth explained.
(Below is a detailed discussion of the topic. See part 1 for a discussion of the consequences for driving.)
This document is based on studies of GPL replays and network traces, plus some assumptions, so it may not be 100% true or accurate. Any comments aimed at improving this analysis are welcome by mail.
So we are dealing with client/server technology at two levels: the first level is within one system, with a server process for central tasks and a client process for every car; the second level is in an online game, with one server system and multiple client systems. In an online game, the client systems delegate some tasks of the server process to the server system: detecting DQ conditions, keeping standings, and recording lap times.
An online race is a cooperation of one server and one or more clients. The clients exchange information with the server. The clients do not exchange information with each other directly.
Each client sends information on the local car to the server, where it is handled by the client process that represents that car. The server sends information on the cars surrounding a car back to that car's client system for display. The bandwidth settings in core.ini determine how many surrounding cars are visible to the local car. For clients on modem class connections, a maximum of four cars in front and one car behind will be visible. For LAN class connections, these numbers may be higher, up to all cars being visible to all clients. See the section on bandwidth for details. I will refer to the modem class values from here on, as they are the most commonly used.
Four cars visible in front and one car behind is in terms of relative track positions, not race positions. For example, at the start of the race, the car on pole can see the second-place car behind him (or next to him) and the cars that are last on the grid, as they are the first he would encounter when following the track.
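To illustrate the selection, here is a minimal Python sketch (my reconstruction for illustration, not actual Papyrus code) that picks the nearest cars ahead and behind by lap fraction, wrapping around the track:

    # Sketch: which remote cars the server sends to a client, selected by
    # relative track position. A reconstruction for illustration only.
    def visible_cars(local_pos, others, n_ahead=4, n_behind=1):
        """local_pos: lap fraction (0..1) of the local car.
        others: dict of car name -> lap fraction."""
        ahead = sorted(others, key=lambda c: (others[c] - local_pos) % 1.0)
        behind = sorted(others, key=lambda c: (local_pos - others[c]) % 1.0)
        return ahead[:n_ahead], behind[:n_behind]

    # At the start, the pole sitter (lap fraction 0.0) 'sees ahead' the cars
    # at the tail of the grid: they come first when following the track.
    grid = {"P2": 0.995, "P3": 0.990, "P4": 0.985, "P5": 0.980, "P6": 0.975}
    print(visible_cars(0.0, grid))
    # (['P6', 'P5', 'P4', 'P3'], ['P2'])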
When the positions on the track change, the server starts sending information on a different set of cars. On the client this has the effect that one car will suddenly appear on the track, and another car will disappear a little later. The overlap is caused by the prediction mechanism extending the life of the disappearing car. The clearest example of this is at the start of the race, when you see cars disappear as new cars are dropped on the grid.
[Screenshots: a car is dropped onto the grid in 3rd place (far left). After a short while, the 2nd-place car disappears, as he is now the 5th car in front of the local car (which is not visible in these pictures). The pole sitter is still visible, as he is the first car behind the local car, which had qualified last.]
Besides display information, the server also sends information on the positions of all cars. You can see this effect if you watch a replay and the car you are viewing disappears as a result of changes in the cars' track positions. It can stay invisible for laps on end, with the replay switching the camera to the pit view, but the position data is still updated and the lap counter for the invisible car keeps incrementing when it crosses the finish line.
Another reason for disappearing cars, besides the limited number of cars displayed, is of course a bad connection somewhere on the path between the two systems involved. If display data for a car is not delivered for a certain period, typically 1 second, the car will disappear after another second. When data comes in again, the car will reappear. When things get too bad, a disconnect will occur.
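In pseudo-form, that timeout behaviour looks like this (the one-second periods are as stated above; the exact internal values are my assumption):

    # Sketch of the disappearance timeout described above.
    STALE_AFTER = 1.0    # seconds without data: prediction only
    REMOVE_AFTER = 2.0   # another second later: car disappears

    def display_state(now, last_packet_time):
        age = now - last_packet_time
        if age < STALE_AFTER:
            return "displayed (fresh data)"
        if age < REMOVE_AFTER:
            return "displayed (prediction only)"
        return "invisible (reappears when data comes in again)"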
Violent collisions occur when two cars are in overlapping positions when the collision is detected. Apparently GPL forces the overlap to be undone the next frame, giving the cars involved a very high speed. As a result the incident will look more like an explosion than a collision (see my exploding grid for an extremely violent example).
So how come cars can be overlapping in the first place? Latency. Each system sees the remote cars where prediction, working on data that is a few hundred milliseconds old, places them. When a collision occurs, each system detects it against the other car's predicted image and reacts immediately with the local car, but the other car carries on along its old path until the remote system's reaction arrives. In the meantime, the two positions can come to overlap. You can imagine what happens when three or more cars are involved, all reacting to each other with the latency delay.
Here is an example from the 1998 GMSS race at Spa:
My Ferrari collides with Brent Martin's car. We both spin. Offline, this would be a light accident.
Suddenly Brent is going straight again, while I continue to spin. Data came in that indicated to my system that Brent was still going straight some time ago, and prediction extrapolated that. Parts come off our cars and my car is lifted as Brent's car is put in a position overlapping mine.
These two images are only 1 frame apart. I have done more than a 360 by now. Brent has been going straight on during my spin, and suddenly he is spinning too. Data came in indicating Brent's spin, which had started some time ago on his own system when it detected the collision with the image of my car.
Bad warping, caused by extremely high latencies, can also be the cause of cars overlapping with the local car when they reappear, giving violent accidents.
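Some arithmetic shows why these accidents look like explosions. If the overlap really is undone within a single frame, as the replays suggest, the implied separation speed is the overlap depth divided by the frame time, and at 36 physics frames per second that gets large very quickly:

    # Why undoing an overlap in one frame 'explodes' the cars. Assumes
    # GPL's physics rate of 36 frames per second; the single-frame
    # separation is an inference from replays, not a documented fact.
    FRAME_DT = 1.0 / 36.0                  # seconds per physics frame

    for overlap in (0.2, 0.5, 1.0):        # overlap depth in metres
        v = overlap / FRAME_DT             # implied separation speed
        print(f"{overlap:.1f} m overlap -> {v:.0f} m/s ({v * 3.6:.0f} km/h)")
    # 0.2 m overlap -> 7 m/s (26 km/h)
    # 0.5 m overlap -> 18 m/s (65 km/h)
    # 1.0 m overlap -> 36 m/s (130 km/h)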
On what basis are the remote cars displayed on the local client system, given that it takes some time for the data on a remote car to reach the local system? Several factors play a role here:
The simplest approach is to start with the moment that the display data for a remote car arrives at the local system. Two questions have to be answered:
1. How old is the data on arrival?
Following the path backwards, we have:
- travel time from server to the local client; best guess is half the latency (ping) reported on the local client
- wait for server update time; on average, this is half the net_xxx_server_send_every period set on the server or half the net_xxx_client_send_every of the remote client, whichever is less
- travel time from remote client to server; best guess is half the latency (ping) reported on the remote client
2. For how long is the data needed?
In constant conditions, the data will be needed for the net_xxx_client_send_every period of the remote client or the net_xxx_server_send_every period set on the server, whichever is greater. With varying latencies on the connections, the data may be needed for a longer or shorter time.
As an example, assume the latency on all connections is constant at 200ms (round trip), giving a travel time of 100ms between clients and server. The following table shows how much time the prediction mechanism has to bridge for various settings of client_send_every on the remote client and server_send_every on the server (top row). The row labelled "1 (Arrival)" gives the average lag at arrival of the data on the local system; you cannot be more up to date than this (on average, that is). The row labelled "1+2 (Extinction)" gives the average lag just before the next data comes in; on average, this is the maximum you will lag behind.
The "Max clients @ nn Kbps" rows give the maximum number of clients a server connection can handle, as derived in the section on bandwidth, assuming that a maximum of 5 surrounding cars are visible on each client and the server_send_size parameter is set to the smallest possible value for the number of clients. These maxima depend only on the send_every settings, not on the latency of the connections (although high latencies will prevent you from using high send_every settings).
Data delay values and maximum number of clients (assuming a fixed latency of 200ms on all connections)
Client send every / server send every | 2/2 | 3/3 | 4/4 | 6/6
1 (Arrival) | 228ms | 242ms | 255ms | 283ms
1+2 (Extinction) | 283ms | 325ms | 367ms | 450ms
Max clients @ 33 Kbps | 2 | 3 | 3 | 5
Max clients @ 56 Kbps | 3 | 4 | 5 | 8
Max clients @ 64 Kbps | 3 | 4 | 6 | 9
Max clients @ 128 Kbps | 6 | 9 | 12 | 19
Max clients @ 256 Kbps | 12 | 19 | 19 | 19
Max clients @ 300 Kbps | 15 | 19 | 19 | 19
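The Arrival and Extinction rows can be reproduced with a small calculation. GPL's clock runs at 36 ticks per second, so a send_every of 2 means one packet per 55.6ms. Here is a Python sketch of steps 1 and 2 above, under the same fixed-latency assumption:

    # Reproduces the Arrival and Extinction rows of the table above,
    # assuming 36 ticks per second and a fixed 200ms round trip (100ms
    # one way) on every connection.
    TICK = 1000.0 / 36.0            # one tick in ms (~27.8ms)
    ONE_WAY = 100.0                 # half the 200ms round-trip latency

    for every in (2, 3, 4, 6):      # client and server send_every equal
        period = every * TICK
        arrival = ONE_WAY + period / 2 + ONE_WAY    # step 1: average age on arrival
        extinction = arrival + period               # step 2: just before the next packet
        print(f"{every}/{every}: arrival {arrival:.0f}ms, extinction {extinction:.0f}ms")
    # 2/2: arrival 228ms, extinction 283ms
    # 3/3: arrival 242ms, extinction 325ms
    # 4/4: arrival 256ms, extinction 367ms  (the table rounds 255.6 down to 255)
    # 6/6: arrival 283ms, extinction 450ms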
Experience in Internet play has shown that the ping on connections can vary between 100ms or below (very fast) and 500ms or above (slow to the point of being unplayable). The influence of the core.ini parameters, especially server_send_every, is relatively small. So if the connections are good, increasing server_send_every may allow more clients to join while maintaining reasonable quality.
The trade-off for the smaller lags gained by decreasing the send_every parameters is of course bandwidth, which is discussed next.
First of all, what data is exchanged between the server and the clients? We have:
- display data on the local car, sent from each client to the server;
- display data on the surrounding cars, sent from the server to each client;
- position data on all cars, sent by the server (see above).
The rate at which the display data is sent, and the amount of data sent each transmission, come from parameters in the core.ini file in the GPL directory. They have the following default values. All frequencies are specified in ticks. All sizes are maxima; when there are not enough clients to fill the packet completely, a smaller packet is sent.
Core.ini bandwidth parameters

Modem class:
net_mdm_client_send_every = 2 ; Client packet freq on dialup
net_mdm_client_send_size = 84 ; Maximum client packet size on dialup
net_mdm_server_send_every = 2 ; Server packet freq on dialup
net_mdm_server_send_size = 84 ; Maximum server packet size on dialup

Lan class:
net_lan_client_send_every = 2 ; Client packet freq on LAN
net_lan_client_send_size = 132 ; Maximum client packet size on LAN
net_lan_server_send_every = 2 ; Server packet freq on LAN
net_lan_server_send_size = 388 ; Maximum server packet size on LAN (19 clients full display)

Class selection:
net_use_mdm_bandwidth_for_tcp_ip = 1

On dialup links, TCP/IP is forced to the modem class values. IPX will always use the LAN class values. On cable or DSL links, TCP/IP will use the modem class values unless net_use_mdm_bandwidth_for_tcp_ip = 0, in which case it will use the LAN class values.
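In pseudo-form, the class selection amounts to this (a sketch of the rules above, not actual GPL code):

    # Which parameter class GPL uses, per the rules above (sketch).
    def bandwidth_class(protocol, link, use_mdm_bandwidth_for_tcp_ip=1):
        if protocol == "ipx":
            return "lan"        # IPX always uses the LAN class
        if link == "dialup":
            return "mdm"        # dialup TCP/IP is forced to the modem class
        # cable/DSL TCP/IP: modem class unless the override is switched off
        return "mdm" if use_mdm_bandwidth_for_tcp_ip else "lan"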
From traces and replays that I made using a server and one client on a local network, I noted the following figures. I checked them using an Internet connection between the systems as well, and simulated a multi-client race by letting AI cars in. The figures listed here are the most important; look here for all details.
This amounts to a simple formula for the bandwidth requirements: packet rate, which is 36/send_every packets per second, times packet size on the wire (the send_size payload plus a fixed per-packet overhead). The results:
Upload bandwidth per client | Bytes per second | Kilobits per second
Client_send_every | 2 | 3 | 4 | 6 | 2 | 3 | 4 | 6
Local car data | 1620 | 1080 | 810 | 540 | 13.0 | 8.6 | 6.5 | 4.3
Download bandwidth per client | Bytes per second | Kilobits per second
Server_send_size / surr. cars visible, per Server_send_every | 2 | 3 | 4 | 6 | 2 | 3 | 4 | 6
36 / 2 cars (2 ahead, 0 behind) | 1620 | 1080 | 810 | 540 | 13.0 | 8.6 | 6.5 | 4.3
52 / 3 cars (3 ahead, 0 behind) | 1908 | 1272 | 954 | 636 | 15.3 | 10.2 | 7.6 | 5.1
68 / 4 cars (3 ahead, 1 behind) | 2196 | 1464 | 1098 | 732 | 17.6 | 11.7 | 8.8 | 5.9
84 / 5 cars (4 ahead, 1 behind) | 2484 | 1656 | 1242 | 828 | 19.9 | 13.2 | 9.9 | 6.6
132 / 8 cars (6 ahead, 2 behind) | - | 2232 | 1674 | 1116 | - | 17.9 | 13.4 | 8.9
196 / 12 cars (9 ahead, 3 behind) | - | 3000 | 2250 | 1500 | - | 24.0 | 18.0 | 12.0
260 / 16 cars (12 ahead, 4 behind) | - | 3768 | 2826 | 1884 | - | 30.1 | 22.6 | 15.1
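Both tables follow from packet rate times packet size on the wire. Working backwards from the figures, a client packet occupies about 90 bytes and a server packet about server_send_size + 54 bytes; these per-packet sizes are my derivation from the tables, not documented values. A sketch:

    # Per-client bandwidth, reproducing the tables above. The per-packet
    # sizes (client ~90 bytes, server send_size + 54 bytes) are derived
    # from the figures, not documented values.
    def client_upload_Bps(send_every):
        return 90 * 36 // send_every                    # bytes per second

    def client_download_Bps(server_send_size, send_every):
        return (server_send_size + 54) * 36 // send_every

    print(client_upload_Bps(2))          # 1620 B/s = 13.0 Kbps
    print(client_download_Bps(84, 2))    # 2484 B/s = 19.9 Kbps
    print(client_download_Bps(132, 4))   # 1674 B/s = 13.4 Kbps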
Server upload and download bandwidth
# clients | surr. cars visible | srvr send_size | send_every cl/srvr | upload Kb/s | download Kb/s
3 | 3 | 52 | 2/2 | 45.8 | 38.9
3 | 3 | 52 | 3/3 | 30.5 | 25.9
4 | 4 | 68 | 3/3 | 46.8 | 34.6
4 | 4 | 68 | 4/4 | 35.1 | 25.9
5 | 4 | 68 | 3/3 | 58.6 | 43.2
5 | 4 | 68 | 4/4 | 44.0 | 32.4
5 | 5 | 84 | 3/3 | 66.2 | 43.2
5 | 5 | 84 | 4/4 | 49.7 | 32.4
8 | 5 | 84 | 3/3 | 106.0 | 69.1
8 | 5 | 84 | 4/4 | 79.2 | 51.9
8 | 8 | 132 | 3/3 | 142.8 | 69.1
8 | 8 | 132 | 4/4 | 107.1 | 51.9
12 | 5 | 84 | 3/3 | 159.0 | 103.7
12 | 5 | 84 | 4/4 | 118.8 | 77.8
12 | 8 | 132 | 3/3 | 214.3 | 103.7
12 | 8 | 132 | 4/4 | 160.7 | 77.8
12 | 12 | 196 | 3/3 | 288.0 | 103.7
12 | 12 | 196 | 4/4 | 216.0 | 77.8
16 | 5 | 84 | 3/3 | 212.0 | 138.2
16 | 5 | 84 | 4/4 | 158.4 | 103.7
16 | 8 | 132 | 3/3 | 285.7 | 138.2
16 | 8 | 132 | 4/4 | 214.3 | 103.7
16 | 12 | 196 | 3/3 | 384.0 | 138.2
16 | 12 | 196 | 4/4 | 288.0 | 103.7
16 | 16 | 260 | 3/3 | 482.3 | 138.2
16 | 16 | 260 | 4/4 | 361.7 | 103.7
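The server figures are simply the per-client figures multiplied by the number of clients: upload uses the server packet size, download the client packet size. A sketch reproducing a few rows (same derived per-packet sizes as above):

    # Server totals, reproducing rows of the table above.
    def server_upload_kbps(n_clients, send_size, send_every):
        return n_clients * (send_size + 54) * (36 / send_every) * 8 / 1000

    def server_download_kbps(n_clients, send_every):
        return n_clients * 90 * (36 / send_every) * 8 / 1000

    print(f"{server_upload_kbps(8, 132, 3):.1f}")    # 142.8 (8 clients, 8 cars, 3/3)
    print(f"{server_download_kbps(8, 3):.1f}")       # 69.1
    print(f"{server_upload_kbps(16, 260, 4):.1f}")   # 361.7 (16 clients, 16 cars, 4/4)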
An interesting setting to try is the following. The server send size is changed from 84 to 132, allowing each client to see 6 cars in front and 2 behind. To compensate for the extra data, the send frequencies are reduced to every 4 ticks. So a little more latency allows more cars to be visible. For this setting to work for dialup users, GPL must first be patched (see here). The settings must be applied to all systems involved.
Dialup users set:
net_mdm_client_send_every = 4
net_mdm_client_send_size = 84
net_mdm_server_send_every = 4
net_mdm_server_send_size = 132 ; (8 cars visible, 6 ahead, 2 behind)
net_use_mdm_bandwidth_for_tcp_ip = 1
Cable/DSL users set the same as dialup, or alternatively:
net_lan_client_send_every = 4
net_lan_client_send_size = 84
net_lan_server_send_every = 4
net_lan_server_send_size = 132 ; (8 cars visible, 6 ahead, 2 behind)
net_use_mdm_bandwidth_for_tcp_ip = 0
This will give the following bandwidth requirements:
Bandwidth for send_size 132, freq 4/4 | Upload Kb/s | Download Kb/s |
Client: | 6.5 | 13.4 |
Server (8 clients): | 107.1 | 51.9 |
Server (12 clients): | 160.7 | 77.8 |
Server (16 clients): | 214.3 | 103.7 |