I've got an old game data file that contains binary data and would like your opinion as to which design pattern(s) I could optimally use to load the data.
The file spec in a nutshell:
Header: General info and offset to Data Dictionary
Data Dictionary: Sequential set of offsets to and sizes of Blocks within the file
Block: Binary data representing a sequential set of one particular type of game object. (There are sizeof(Block) / sizeof(particular game object) objects in a block)
The Blocks can be nested and contain other Blocks. A "Mission" Block contains many "Level" Blocks and a "Level" Block contains many sub-Blocks such as vertex data, enemy data, and so forth and so on.
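For concreteness, the layout described above might map to something like this in code (field names and fixed widths are illustrative guesses, not the actual spec):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical on-disk layout; assumes little-endian, fixed-width fields.
struct Header {
    std::uint32_t magic;       // file identifier
    std::uint32_t version;
    std::uint32_t dictOffset;  // byte offset of the Data Dictionary
    std::uint32_t dictCount;   // number of dictionary entries
};

struct DictEntry {
    std::uint32_t blockOffset; // byte offset of the Block within the file
    std::uint32_t blockSize;   // size of the Block in bytes
};

// Objects per homogeneous block, per the spec:
inline std::size_t objectCount(const DictEntry& e, std::size_t objectSize) {
    return e.blockSize / objectSize;
}
```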
I could easily create a god class where everything is read more or less sequentially, but that defeats most of the beauty of a well-designed C++ API.
Maybe a Strategy pattern? What would be best?
I was initially thinking of having a main class that represents the loaded game data. I'd have a base GameBlockBase from which the different game object block types would derive like GameMissionBlock, GameLevelBlock and so forth. Then, something like a GameDataObjectBase where GameDataVertex and GameDataAudioSound would derive.
The main class would just hold vectors of GameBlock* and maintain the current file offset. It might create a GameBlock* and call GameBlock->Load(fileData, &offset) and append that block to the appropriate vector.
Internally, in GameBlock->Load(..), each derived GameBlock would know how to create GameDataVertex objects in sequence, calling ->Load(..) on each until EOF.
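In rough C++, the idea looks something like this (signatures, and the assumption of a uint32 count prefix, are just a sketch, not a worked-out design):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

class GameBlockBase {
public:
    virtual ~GameBlockBase() = default;
    // Parse this block from the raw file bytes, advancing *offset.
    virtual void Load(const std::vector<std::uint8_t>& fileData,
                      std::size_t* offset) = 0;
};

struct GameDataVertex {
    float x, y, z;
    void Load(const std::vector<std::uint8_t>& fileData, std::size_t* offset) {
        std::memcpy(this, fileData.data() + *offset, sizeof(*this));
        *offset += sizeof(*this);
    }
};

class GameLevelBlock : public GameBlockBase {
public:
    // Assumes the block starts with a uint32 object count; the real format
    // might instead derive the count from the dictionary entry's size.
    void Load(const std::vector<std::uint8_t>& fileData,
              std::size_t* offset) override {
        std::uint32_t count = 0;
        std::memcpy(&count, fileData.data() + *offset, sizeof(count));
        *offset += sizeof(count);
        vertices.resize(count);
        for (auto& v : vertices)
            v.Load(fileData, offset);
    }
    std::vector<GameDataVertex> vertices;
};
```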
This is clearly not well fleshed out, but that's because I'd love to get your opinions before I go any further.
I don't think binding the internal representation to an external serialized format is a good idea. You should separate the two, and keep serialization to/from file as a distinct operation.
What if, for example, you wanted to support a different format, more suitable for a stream?
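For example, a minimal sketch of that separation (names and the wire format are made up): the in-memory Level knows nothing about bytes on disk, and each serialized format gets its own reader.

```cpp
#include <cstdint>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

struct Vertex { float x, y, z; };

// Pure in-memory model: no knowledge of any file format.
struct Level {
    std::vector<Vertex> vertices;
};

// One reader per serialized format; supporting a new format means adding
// a function, not touching Level. Assumes a uint32 count followed by
// tightly packed vertices.
Level readLevelFromBinary(std::istream& in) {
    Level level;
    std::uint32_t count = 0;
    in.read(reinterpret_cast<char*>(&count), sizeof(count));
    level.vertices.resize(count);
    in.read(reinterpret_cast<char*>(level.vertices.data()),
            count * sizeof(Vertex));
    return level;
}
```

Because the reader takes any std::istream, the same code works on a file, an in-memory buffer (std::istringstream), or anything else stream-shaped.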
Have you considered the KISS design pattern instead?
Half kidding, but don't overthink or overcook the thing.
Build your OOP classes and tooling around the actual data being represented.
Do a simple file-to-memory read once; then each section can read the same byte dump and parse out what it needs, in parallel, to populate the OOP objects you made.
You said it was old, so I'm assuming the whole file fits into one block of memory without any stress, e.g. no bigger than 4 GB or so?
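A sketch of that read-once approach (assuming the file does fit in memory; the Span/section names are mine, not from any library):

```cpp
#include <cstddef>
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Read the whole file into one buffer in a single shot.
std::vector<std::uint8_t> slurp(const std::string& path) {
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    std::vector<std::uint8_t> buf(static_cast<std::size_t>(in.tellg()));
    in.seekg(0);
    in.read(reinterpret_cast<char*>(buf.data()),
            static_cast<std::streamsize>(buf.size()));
    return buf;
}

// Each section's parser gets a read-only view into the shared buffer and
// never touches the file itself, so sections can be parsed independently,
// even on different threads.
struct Span {
    const std::uint8_t* data;
    std::size_t size;
};

inline Span section(const std::vector<std::uint8_t>& buf,
                    std::size_t off, std::size_t size) {
    return Span{buf.data() + off, size};
}
```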