A year ago, I decided to revamp my 3D tile editor so that the data structures, data handling and scenes would scale. I added a few features along the way that I thought were worth the effort.
I started by revamping my scene so that all the tiles are stored in an efficient spatial data structure, an octree. The octree also manages how much polygonal data is placed in each separate game object, which lets me minimise the number of draw calls and keep my geometry manageable.
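The octree idea can be sketched roughly like this (a minimal C++ sketch with illustrative names; the editor's actual structure surely differs, e.g. in how it batches polygons per game object):

```cpp
#include <array>
#include <memory>
#include <vector>

struct Vec3 { float x, y, z; };

// Sketch: an octree node that buckets tile ids by position and splits
// once a node holds more than kCapacity tiles. kCapacity stands in for
// whatever per-object polygon budget the editor actually uses.
struct Octree {
    static constexpr std::size_t kCapacity = 8;
    Vec3 center;
    float halfSize;
    std::vector<int> tiles;                       // tile ids stored at this node
    std::vector<Vec3> positions;                  // positions paired with tiles
    std::array<std::unique_ptr<Octree>, 8> kids;  // lazily created children

    Octree(Vec3 c, float h) : center(c), halfSize(h) {}

    // One bit per axis selects which of the eight children contains p.
    int childIndex(const Vec3& p) const {
        return (p.x >= center.x) | ((p.y >= center.y) << 1) | ((p.z >= center.z) << 2);
    }

    void insert(int tileId, const Vec3& p) {
        if (!kids[0] && tiles.size() < kCapacity) {
            tiles.push_back(tileId);
            positions.push_back(p);
            return;
        }
        if (!kids[0]) subdivide();
        kids[childIndex(p)]->insert(tileId, p);
    }

    void subdivide() {
        float q = halfSize * 0.5f;
        for (int i = 0; i < 8; ++i) {
            Vec3 c{center.x + ((i & 1) ? q : -q),
                   center.y + ((i & 2) ? q : -q),
                   center.z + ((i & 4) ? q : -q)};
            kids[i] = std::make_unique<Octree>(c, q);
        }
        // Push the tiles held here down into the new children.
        for (std::size_t n = 0; n < tiles.size(); ++n)
            kids[childIndex(positions[n])]->insert(tiles[n], positions[n]);
        tiles.clear();
        positions.clear();
    }

    std::size_t count() const {
        std::size_t n = tiles.size();
        if (kids[0])
            for (const auto& k : kids) n += k->count();
        return n;
    }
};
```

In a renderer, each leaf would then map to one game object's worth of geometry, so the draw-call count follows the tree rather than the raw tile count.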
There were a lot of precomputation exercises that have for the most part been replaced or scrapped entirely, because proper computational geometry could replace them with far more robust and manageable structures.
I felt I needed a more configurable way to build my tiles, so I spent a few months implementing robust CSG operations using BSP trees and fixed-point linear equations. That was very educational and fun to do. I managed to solve all the related issues that come with them: t-vertex problems, simplification, input conversion, and rounding heuristics for linear systems so that the GCD operations return smaller numbers. And in the case where my primary systems cannot simplify and compact the data enough, I can fall back on a big-number library to ensure that the integer calculations do not overflow.
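The GCD compaction heuristic amounts to dividing an integer plane equation through by the GCD of its coefficients so the numbers stay small. A hedged C++ sketch (`Plane` and `reduce` are names I am making up for illustration, not the editor's actual code):

```cpp
#include <cstdint>
#include <cstdlib>
#include <numeric>

// An integer plane ax + by + cz + d = 0, as used in fixed-point CSG.
struct Plane { int64_t a, b, c, d; };

// Divide all coefficients by their common GCD; the plane is unchanged
// geometrically, but subsequent exact arithmetic works on smaller numbers.
Plane reduce(Plane p) {
    int64_t g = std::gcd(std::gcd(std::llabs(p.a), std::llabs(p.b)),
                         std::gcd(std::llabs(p.c), std::llabs(p.d)));
    if (g > 1) { p.a /= g; p.b /= g; p.c /= g; p.d /= g; }
    return p;
}
```

When even the reduced coefficients threaten to overflow, that is where the big-number fallback mentioned above would take over.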
After the baby was born, I rewrote all the editor logic and simplified it according to what I had learned from my previous prototype, replacing old features with something simpler or more powerful. I have managed to get quite far during the last three months while taking care of the infant, even though I don't get more than a couple of hours of work in per day. All the features I currently need to implement are relatively small in scope, which fits nicely with the otherwise busy life of tending to a demanding 15-week-old baby boy.
I wrapped up my persistence features using FlatBuffers, which I started looking at nine months ago but held off on until I had all the dependent systems in place.
The internals were split into small reusable components so that I could build libraries out of them and minimise the size of the scenes both on disk and in memory. The final data layout reminds me of the traditional static vs. dynamic libraries that programmers are used to dealing with: the scene holds references to libraries, with versioning information and indices, and when the scene is loaded the dependent libraries and their respective data are loaded as well.
The final geometry data is inferred from the library data, so it does not have to be part of the saved information, even though a cached version would let large scenes avoid recreating the triangles every time the scene is loaded. My scene files are for the most part just indices and references, but tiles that are not stored in any library are stored along with the scene. I can also store all the dependencies in one scene when exporting, to distribute among friends or something to that effect.
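The layout described above might look something like this in FlatBuffers IDL (a hypothetical schema sketch; the table and field names are illustrative, not the editor's actual schema):

```
namespace TileEditor;

table LibraryRef {
  name:string;         // library identifier
  version:uint;        // versioning information for compatibility checks
}

table TileInstance {
  library:uint;        // index into the scene's library_refs
  tile:uint;           // index of the tile within that library
  position:[float];    // placement in the scene
}

table Scene {
  library_refs:[LibraryRef];
  tiles:[TileInstance];
  embedded_tiles:[ubyte];  // tiles not in any library travel with the scene
}

root_type Scene;
```

The point is that the scene body is almost entirely indices and references; geometry only appears inline for tiles that have no library to live in.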
And since my scenes are not arbitrary executable binaries that need all their dependencies present in order to load, I will simply populate any missing tiles with predefined placeholder containers that let you either fix or replace the faulty tiles.
I am using FlatBuffers for all my persistence needs and they seem to work as advertised, but I started out with some wrong assumptions about them. I have used libraries where a direct copy and pointer fix-up was performed on the saved data, so that all you need to do is load the binary and cast it to your in-game structure. That is very fast and efficient, but to keep the data cross-platform you would need to adjust your bytes in some cases. FlatBuffers do “sort of” work like this: they read in all the data as one block, then give you wrapper classes that interpret each section of the byte buffer as your structures and objects. It is a very good approximation of the ideal setup, with a few features that might come in handy down the line.
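The wrapper-class idea can be illustrated without FlatBuffers itself. This is emphatically not FlatBuffers' real wire format, just the concept: accessors that interpret a raw byte buffer in place instead of deserializing it into separate objects:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// A "view" over raw bytes: fields are read at known offsets on demand,
// so loading is just reading the block into memory. Layout is made up
// for illustration (4-byte id followed by a 4-byte float).
struct TileView {
    const uint8_t* base;
    uint32_t id() const     { uint32_t v; std::memcpy(&v, base, 4);    return v; }
    float    height() const { float v;    std::memcpy(&v, base + 4, 4); return v; }
};

// Produce a buffer in that layout, standing in for a saved file.
std::vector<uint8_t> makeBuffer(uint32_t id, float height) {
    std::vector<uint8_t> buf(8);
    std::memcpy(buf.data(), &id, 4);
    std::memcpy(buf.data() + 4, &height, 4);
    return buf;
}
```

Real FlatBuffers add vtables, defaults and schema evolution on top, which is what makes them more than a plain cast-the-bytes scheme.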
I finished writing the CSG operation logic, and it passes all the tests described in the original paper that introduced this method. To be able to test and reproduce faults in the system, I created a CSG file format that is basically a list of commands to execute, which then returns a solid polygonal soup to be used. I can replay the commands like a sequence of operations to track and follow the construction of the object, which makes for a cool effect while the objects are relatively simple.
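The replayable command-list idea can be sketched like so, with the polygonal solid stood in by a trivial set of 1-D "voxels" so the booleans stay obvious (illustrative only; the real format and the BSP-based operations are far more involved):

```cpp
#include <set>
#include <utility>
#include <vector>

using Solid = std::set<int>;  // stand-in for a polygonal solid

enum class Op { Union, Subtract, Intersect };
struct Command { Op op; Solid operand; };

// Replay a recorded script against a starting solid. In the real system
// each step yields intermediate geometry, which is what makes the
// construction watchable and faults reproducible.
Solid replay(Solid current, const std::vector<Command>& script) {
    for (const Command& c : script) {
        Solid next;
        switch (c.op) {
            case Op::Union:
                next = current;
                next.insert(c.operand.begin(), c.operand.end());
                break;
            case Op::Subtract:
                for (int v : current)
                    if (!c.operand.count(v)) next.insert(v);
                break;
            case Op::Intersect:
                for (int v : current)
                    if (c.operand.count(v)) next.insert(v);
                break;
        }
        current = std::move(next);
    }
    return current;
}
```

Because the file is just the script, truncating it after command N replays the construction up to step N, which is exactly what makes faults easy to bisect.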
I changed the way I handle polygonal data to ensure that the system can operate on coefficients as large as 32 bits, and refactored and simplified the system quite a lot. I am very pleased with the results.
Now I need to focus on the tools to create objects with, since the backend, as measured by the test results and the statements in the original paper, passes as “unconditionally robust”. I ran solid tests on the huge lumps of ugly goo that these stress tests produced, and everything came back green: no t-junctions and no non-manifold polygons. I could mention a handful of rather expensive content packages that would fall over rather ungracefully when trying to replicate the same tests with their fragile, point-based methods.
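One of those solid tests, the manifold check, can be sketched as counting face-edge incidences: in a closed manifold mesh, every undirected edge must be shared by exactly two faces. An illustrative version (the editor's actual validation suite does more, such as the t-junction detection):

```cpp
#include <map>
#include <utility>
#include <vector>

using Face = std::vector<int>;  // vertex indices, in winding order

// Returns true if every undirected edge is used by exactly two faces.
bool isManifold(const std::vector<Face>& faces) {
    std::map<std::pair<int, int>, int> edgeCount;
    for (const Face& f : faces)
        for (std::size_t i = 0; i < f.size(); ++i) {
            int a = f[i], b = f[(i + 1) % f.size()];
            if (a > b) std::swap(a, b);  // normalise edge direction
            ++edgeCount[{a, b}];
        }
    for (const auto& [edge, n] : edgeCount)
        if (n != 2) return false;
    return true;
}
```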
The test below is the Stanford bunny, sculpted with either an icosahedron or a dodecahedron subtracted from a box and sampled from the volume around the bunny model.
So, as far as complicated back-end systems go, I am set.
Most of the editor integration is there; next I need to create the tools for sculpting the tiles, controlling the placement and dimensions of the primitives that are used.
Here is just a simple playback of a previous random box cutting operation.
Something in that vein, where I focus on a centre tile and sculpt it with spheres, prisms, pipes and other useful primitives.
Well, no rest for the wicked.