
First Person Something Something

Started by
18 comments, last by nes8bit 23 years, 9 months ago
It's an interesting idea. I looked at something similar based on the Huffman compression algorithm: basically, try to use fewer bits to encode the more probable values. The main problem is that the client and server have to agree on the exact frequencies for the values, otherwise they won't generate the same encoded bits. We can't depend on the client receiving all the data the server sends (over UDP) due to loss, so there has to be an acknowledgement scheme. This increases bandwidth usage from the client to the server. Also, due to the nature of probabilistic compression, common values are compressed while others expand in bit count. This can actually cause an increase in packet sizes if the sampling window is too small or too large. You might want to look into those compression algorithms; they might be useful.

Though I've not implemented it, the best scheme I could come up with is a set of 32 compression tables stored on both the client and server side. Each compression table contains the encoded bits used to compress/decompress a packet. They are generated by the server from a sampling window of one second's worth of client data (about 500-2.5K bytes of data in this case). Each table takes up about 256 bytes, and the server doesn't use a table in the compression scheme until the client sends an acknowledgement saying it received it (this is all UDP based). The client doesn't compress its data to the server; only the server compresses data to the client. The tables are recycled continuously. The server picks the best compression table for every compressed packet it sends. It's very easy to calculate the compressed size of a packet, so the server can run through all the tables and choose the best one for each packet. The benefits of this scheme are that it's adaptive, there's no need to send a compression header with each compressed packet, it handles loss, it's server mediated, and it has a small bandwidth footprint ( ~256 bytes/table + ~16 bytes for the acknowledgement ) per second.

Good Luck

-ddn
Very interesting tricks you got there.

---------------------------
"communists eat little kids." -The Alchemist
Re: Huffman tables

As I understand these beasts, they allow you to compress data by first sorting it by frequency in a binary tree, with short paths from the root to the most frequent values. Then each bit in the compressed data selects a left (0) or right (1) path through the tree. Once a leaf node is reached, the data at the leaf is appended to the buffer and decompression restarts at the root of the tree with the next bit in the data stream.
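That decode walk can be sketched in a few lines of C++ (an array-based tree of my own devising; the node layout and names are not from the post):

```cpp
#include <string>
#include <vector>

// Each bit picks left (0) or right (1); at a leaf we emit the symbol and
// restart at the root for the next one.
struct Node {
    int left = -1, right = -1; // child indices into the tree vector
    char symbol = 0;           // valid only when leaf == true
    bool leaf = false;
};

std::string decode(const std::vector<Node>& tree, const std::vector<int>& bits) {
    std::string out;
    int cur = 0; // root lives at index 0
    for (int bit : bits) {
        cur = bit ? tree[cur].right : tree[cur].left;
        if (tree[cur].leaf) {
            out += tree[cur].symbol;
            cur = 0; // restart at the root
        }
    }
    return out;
}
```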

This is useful for some types of data, such as speech synthesis (each speech sound, or phoneme, is at a leaf node), but I don't see it as being particularly useful for unrestricted data such as you'd get in game-engine packets. A simple look-up table for commands should be adequate. As for the command-specific data, perhaps some other compression scheme could be used.

Ideas anyone?

Dave
Well, you would create the Huffman tree from the raw binary data of the packets. As I mentioned above, FPS games usually create small packets with strict delivery requirements, which limits the usefulness of conventional compression schemes. If you tried to compress a 100-byte packet using zlib, it would actually grow by about 200% due to the header. If you used an adaptive scheme like arithmetic compression, the sample data is so small that you'd get 1-2% compression, hardly worth the time spent. But if you cache the Huffman tables on the server and client, separate from the packets, you can reference a table with a single byte (since I use fewer than 256 tables and don't want to deal with sub-byte data boundaries in my packets) and get major savings.

It's also important to realize that a 100-byte packet contains very little redundancy, as it's been compressed by hand already. That is to say, all the obviously redundant data is gone: runs of zeros eliminated, fields truncated. That doesn't give you a random distribution, but it's not easily compressed data either. This scheme uses a history buffer of 32 recycled Huffman trees, which it searches for the best possible compression. It's possible the new data can't be compressed by any of the tables, so the worst case is time spent with no compression. Truthfully, from my tests, I get about 30%-50% compression using buffered Huffman trees on my packet data (with a header of 1 byte). I never had the time to implement it, but it would be a big savings.
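As a sketch of the table-building step (my own code, not ddn's implementation), the server could derive per-byte code lengths from a one-second sample window using the standard Huffman merge, where combining two subtrees adds one bit to the code length of every symbol inside them:

```cpp
#include <cstdint>
#include <queue>
#include <vector>

// Build per-byte-value Huffman code lengths from a sample of packet data.
std::vector<int> buildCodeLengths(const std::vector<uint8_t>& sample) {
    std::vector<long long> freq(256, 0);
    for (uint8_t b : sample) ++freq[b];

    // Each heap item is a subtree: its total weight plus the byte values it contains.
    struct Item { long long weight; std::vector<int> members; };
    auto cmp = [](const Item& a, const Item& b) { return a.weight > b.weight; };
    std::priority_queue<Item, std::vector<Item>, decltype(cmp)> pq(cmp);

    for (int v = 0; v < 256; ++v)
        if (freq[v] > 0) pq.push({freq[v], {v}});

    std::vector<int> len(256, 0);
    if (pq.size() == 1) { len[pq.top().members[0]] = 1; return len; }

    while (pq.size() > 1) {
        Item a = pq.top(); pq.pop();
        Item b = pq.top(); pq.pop();
        // Merging deepens every symbol in both subtrees by one bit.
        for (int v : a.members) ++len[v];
        for (int v : b.members) ++len[v];
        std::vector<int> merged = a.members;
        merged.insert(merged.end(), b.members.begin(), b.members.end());
        pq.push({a.weight + b.weight, merged});
    }
    return len;
}
```

A real implementation would then turn these lengths into canonical codes so the client can rebuild the same tree from the 256-byte table alone.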

Good Luck

-ddn
Check out Mike Abrash's article on how Quake does it: www.bluesnews.com/abrash/

InFerN0

Not all who wander are lost...
Oh yeah! How could I forget? Too bad it isn't VB.

---------------------------
"Don't die for your country, make some other dumb bastard die for his" -General George S. Patton
Oh man. That has only Graphical thingies. :-/

---------------------------
"Don't die for your country, make some other dumb bastard die for his" -General George S. Patton
The part before the acknowledgements has some network info. Abrash basically says they used a client-server protocol and only worried about send/receive confirmations for critical data, not for movement/jumping/shooting/etc. It's an interesting read, but probably not useful for your problem.


I'm working on this right now, and here's what I'm doing:

I'm sending absolute positions with trajectory info.
Something like this stores the info for each of my characters:

typedef struct _CHAR_POS_INFO{
    DWORD x; // x position
    DWORD y; // y position
    DWORD z; // z position
    DWORD h; // heading in radians / degrees
    DWORD v; // velocity in percent: 0 = 100% reverse | 100 = motionless | 200 = 100% forward
}CHAR_POS_INFO, *PCHAR_POS_INFO;

I wrap all network events into a message that has a header like:

typedef struct _NET_MSG{
    DWORD Length; // Length of the message
    DWORD TYPE;   // Operation/Data Type identifier
}NET_MSG, *PNET_MSG;
Wow. That's exactly what I'm doing. It's just that I'm using a byte (char in C++) for the length and a byte for the type.

---------------------------
"Don't die for your country, make some other dumb bastard die for his" -General George S. Patton

This topic is closed to new replies.
