Whenever the easy-to-edit file format for something is different than the easy-to-use-in-your-game format, it is time to think about an asset pipeline. The key idea of an asset pipeline is that the files you edit (textures, 3D models, level data) are translated by code you don't ship into easy-to-read data formats, possibly through a series ("pipeline") of automated steps.
Thinking in terms of asset pipelines has three main advantages:
The easiest-to-load formats are ones you can copy into memory and immediately use:

```cpp
std::unique_ptr< uint8_t[] > blob = load_binary_blob("sprite.file");
sprite = reinterpret_cast< Sprite * >(blob.get());

//or:
size_t size = 0;
std::unique_ptr< uint8_t[] > blob2 = load_binary_blob("player.mesh", &size);
glBufferData(GL_ARRAY_BUFFER, size, blob2.get(), GL_STATIC_DRAW);
```
In practice, you often have structures that aren't quite this nice to load (e.g. they contain std::vectors), but you should still strive for this sort of simplicity.
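To make "copy into memory and immediately use" concrete, here is a sketch of a struct designed for blob loading (the Sprite layout below is a made-up example, not from any particular engine): fixed size, trivially copyable, no pointers or containers, so the bytes on disk are exactly the bytes in memory.

```cpp
#include <cstdint>
#include <type_traits>

//A hypothetical sprite layout designed for blob loading:
struct Sprite {
	uint32_t width, height;      //size in pixels
	float anchor_x, anchor_y;    //placement/rotation origin
	uint8_t pixels[64 * 64 * 4]; //RGBA data (fixed size for simplicity)
};

//Because the struct is trivially copyable, a single read() into its bytes
// produces a valid, ready-to-use Sprite -- no parsing step needed:
static_assert(std::is_trivially_copyable< Sprite >::value,
	"Sprite can be memcpy'd, so it can be loaded with a single read");
```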
Good format tips:

- Store sizes in the file: it's faster to read a size directly -- file.read(4, &size) -- than to do a bunch of seeking to try and find the file size.
- When reading vectors, it's faster to vector.resize() and do one big read than to do multiple small reads.
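Both tips together look something like this sketch (hypothetical helper names; error checking omitted): the writer puts the element count up front, so the reader can resize once and slurp the data in a single read.

```cpp
#include <cstdint>
#include <fstream>
#include <vector>

//Write a 4-byte size header, then the data, in two writes:
void write_chunk(std::ofstream &out, std::vector< float > const &data) {
	uint32_t size = uint32_t(data.size());
	out.write(reinterpret_cast< char const * >(&size), 4);
	out.write(reinterpret_cast< char const * >(data.data()),
		std::streamsize(data.size() * sizeof(float)));
}

//Read the size, resize once, then do one big read:
std::vector< float > read_chunk(std::ifstream &in) {
	uint32_t size = 0;
	in.read(reinterpret_cast< char * >(&size), 4);
	std::vector< float > data(size);
	in.read(reinterpret_cast< char * >(data.data()),
		std::streamsize(size * sizeof(float)));
	return data;
}
```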
Design Tension: Nice C++-standard-library data structures generally can't be loaded from disk in place, while C-style pointer-based structures can (with "pointer fixup" hacks). You'll need to think about which you want to use. I generally use the former -- I find that C++ standard library data structures are much friendlier to work with, and the additional loading code is minimal.
Getting your easy-to-edit data into an easy-to-load format often involves little conversion scripts or utility programs.
Python is a great language for writing these scripts, for three reasons:
But don't be afraid to write small utilities in C++. One bonus is that it's very easy to put data into the structures your game expects. (You can even put these utilities into your main game code, accessed by special command-line options. This is somewhat inelegant but occasionally the most expedient way to do things.)
Makefiles or shell scripts are also useful as a way of stringing together a bunch of commands:

```make
all : \
	../dist/textures/text.png \
	../dist/text.blob

FONT=36daysag.ttf

../dist/textures/text.png ../dist/text.blob : *.txt ../dist/dump-glyphs Makefile
	../dist/dump-glyphs ../dist/textures/text.png ../dist/text.blob '$(FONT)' *.txt

zh/text.png zh/text.blob : zh/*.txt ../dist/dump-glyphs Makefile
	cd zh && ../../dist/dump-glyphs text.png text.blob '../wqy-microhei.ttc' *.txt
```
Design Tension: It might seem natural to integrate your asset processing tools into your standard build process. Doing so will avoid asset-out-of-sync errors that might otherwise cause you confusion. On the flip side, making asset processing part of the standard build process commits you to writing asset processing tools that work everywhere you want to build. This can force you to write far more robust code than you would otherwise need to.
ImageMagick is really useful for command-line image manipulation.
However, ImageMagick also continuously changes its command-line options, colorspace conversions, etc.
So be wary of pipelines that must work on multiple computers and use ImageMagick utilities like