I have been very busy the last few months: project work, moving house, and my normal day job.
My alarm still rings at 5:30 in the morning and I still put on my pajamas and head to the computer to try and get something done until other responsibilities need attention. But those responsibilities have changed a bit. I don’t need to pay any attention to my current employer, since I don’t have to do any work during the full three months of notice. But I do have a 4-day-old baby boy who is taking up most of my attention these days. So the alarm and other timing issues might be in a bit of flux for a few days as well.
A week ago, I managed to wrap up all my work relating to my set-theoretic polygonal operator. It started off as feature creep in April and extended for a good while until the whole end-to-end solution was operational and happy.
The biggest problem with any set-theoretic operation on geometry is numerical robustness. It is very easy to create cases where floating point numbers and their accumulated errors will break your algorithms. What matters most is good data: reliable data that can be trusted to stay correct across all the operations of the system.
Floating point numbers are a 90-year-old idea that worked very well for the problems of the time, but today they are causing a lot of problems. There is a newly proposed number system called the unum, which tries to solve the problems of floating point numbers by having variable accuracy, tracking error state, and cutting off computations beyond the point of your predefined accuracy.
Even with such a number type, you would have to alter your solutions somewhat to take advantage of it.
Making robust systems does not seem to get enough of the spotlight. As soon as we create an approximation of a working system, we sell it as complete and patch things afterwards. I don’t really like those sorts of solutions; I like reliable things that behave correctly and can safely grow.
There was a paper published in 1989 about a method to create a numerically robust system for computational geometry, and it seems to have gone unnoticed. The method is not widely used, probably because of sunk cost and active research into trying to make things work correctly with this 90-year-old number system. ‘We are almost there, dude.’
But the simplicity of this article and its solution is quite impressive. It was picked up again 20 years later and applied to a binary space partitioning tree to sculpt 3D geometry. That did not generate a lot of noise either, mostly because people have in part just given up on boolean operations: they are a long, dark pitfall of special cases and crashes, so surely some other method should be used.
The original article, “A solid modelling system free from topological inconsistency”, advocated using whole numbers with linear plane equations to make the system completely robust as long as you can prevent overflow. The number of operations is fixed and predictable, so you can easily compute how large the data type for the coefficients needs to be to prevent any sort of computational overflow. In my case M = 24L^5, where L is the largest coefficient within the system and M the maximum whole number that fits within the integer type. For a 32-bit integer L = 38, and for a 64-bit integer L = 3288.
– 32 bits → L = 38
– 64 bits → L = 3 288
– 96 bits → L = 277 669
– 128 bits → L = 23 448 747
– 160 bits → L = 1 980 211 661
– 192 bits → L = 167 225 916 958 … the coefficients now exceed 32 bits.
– 224 bits → L = 14 121 978 900 071
– 256 bits → L = 1 192 580 023 962 346
– 320 bits → L = 8 504 944 325 723 147 264
Past this point, you need more than 64 bits to store the coefficients themselves.
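The table above can be reproduced directly from the bound M = 24L^5. Here is a minimal sketch, assuming signed two’s-complement integers (so a type of `bits` bits holds values up to 2^(bits−1) − 1); the function name is my own, not from the paper:

```python
# Largest safe coefficient L for a signed integer of a given bit width,
# from the overflow bound M = 24 * L^5.
def max_coefficient(bits: int) -> int:
    m = (1 << (bits - 1)) - 1      # largest value the integer type can hold
    budget = m // 24               # L^5 must not exceed this
    # integer fifth root of budget, by binary search
    lo, hi = 0, 1 << ((budget.bit_length() // 5) + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 5 <= budget:
            lo = mid
        else:
            hi = mid - 1
    return lo

print(max_coefficient(32))   # -> 38
print(max_coefficient(64))   # -> 3288
```

Python’s arbitrary-precision integers make the same check easy for the wider types as well.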
For some, these numbers would give too small a resolution for their polygons. Well, in that case you can fall back on a library like “ttmath” and use just about any size of integer you need. Making sure that the input data is good will ensure that the output data is good.

I did add an additional method for compacting the numbers: rounding the vertices used by large triangles to integers with the most prime factors, to maximise the odds of getting a good common divisor. Where that method fails, the polygon is tessellated into polygons with shorter edges, so that the area is smaller for the determinants used in the plane equation setup. The mix of those methods works obscenely well for my geometry and the resolution I found to be good enough for my editor.

So, in essence, I don’t need the additional bits for my computations, since I can use 64 bits to do my fixed point calculations without the risk of overflow. A lot more efficiency, simplicity and portability for me. With the addition of my octree, I can keep all the operations local to each node of the octree, so I can make the shapes themselves as large as I want.
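The compaction step boils down to finding a common divisor. A minimal sketch of that last part (the function name is mine, and the prime-factor-friendly rounding of vertices is omitted): once a plane equation a·x + b·y + c·z + d = 0 has integer coefficients, divide all four through by their greatest common divisor so the numbers stay small.

```python
from math import gcd

# Divide integer plane coefficients (a, b, c, d) by their common divisor
# to keep the fixed point numbers as small as possible.
def compact_plane(a: int, b: int, c: int, d: int) -> tuple:
    g = gcd(gcd(abs(a), abs(b)), gcd(abs(c), abs(d)))
    if g > 1:
        a, b, c, d = a // g, b // g, c // g, d // g
    return a, b, c, d

print(compact_plane(24, -12, 36, 60))   # -> (2, -1, 3, 5)
```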
The first piece of the puzzle is to convert triangle shapes into plane-based polygons. It is quite easy to generate the polygons and planes, but it is best to try and keep the triangles small. You can look into something like voxels, then use marching cubes or constrained elastic surface nets to create a more uniform surface with smaller polygons than what is traditionally generated from procedural triangle soups. As long as you can keep the numbers good and clean for the equations, you can pick whichever method works best.
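The triangle-to-plane conversion itself is a single exact cross product when the vertices are integers. A minimal sketch (names are my own, not from the referenced papers):

```python
# Turn a triangle with integer vertices into an exact integer plane
# equation a*x + b*y + c*z + d = 0.
def triangle_plane(p0, p1, p2):
    u = tuple(p1[i] - p0[i] for i in range(3))
    v = tuple(p2[i] - p0[i] for i in range(3))
    # normal = u x v (cross product), computed entirely in integers
    a = u[1] * v[2] - u[2] * v[1]
    b = u[2] * v[0] - u[0] * v[2]
    c = u[0] * v[1] - u[1] * v[0]
    d = -(a * p0[0] + b * p0[1] + c * p0[2])
    return a, b, c, d

# unit triangle lying in the z = 0 plane
print(triangle_plane((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # -> (0, 0, 1, 0)
```

Smaller triangles keep these determinants small, which is exactly why the edge length matters for the overflow bound.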
The centre piece of the puzzle is based on an article called “Fast and Exact Linear Booleans”. It sets up simple predicates and heuristics to cut the plane geometry without introducing any sort of point data, handling most degeneracies automatically and allowing you to create a fully robust set-theoretic operator for your polygonal geometry. In addition, I try to undo any unwanted cuts to the polygons: if two branches are classified on the same side of the half space, you just remove those branches and undo those cuts. I also do a couple of other passes to merge additional pieces and cut the number of plane polygons down to the minimum needed to keep the shape.
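To give a flavour of the kind of exact predicate this approach relies on (this is a simplified point-versus-plane test, not the paper’s full plane-intersection predicate): with integer plane coefficients and integer points, the side classification is just the sign of an integer expression, so it can never be wrong.

```python
# Exact side classification of an integer point against an integer plane.
def classify(plane, point):
    a, b, c, d = plane
    x, y, z = point
    s = a * x + b * y + c * z + d
    return (s > 0) - (s < 0)   # 1 = front, -1 = back, 0 = on the plane

plane = (0, 0, 1, 0)           # the z = 0 plane
print(classify(plane, (5, 2, 3)))    # -> 1
print(classify(plane, (5, 2, -3)))   # -> -1
print(classify(plane, (5, 2, 0)))    # -> 0
```

The “on the plane” case is a real answer here, not a tolerance band, which is what makes the degenerate cases tractable.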
The final part of the puzzle is to generate the polygonal geometry correctly. Just spitting out the convex polygons will give you a solid geometric shape, but it will have a lot of t-vertices. The t-vertices can cause rendering artefacts for the rasteriser and can make it very difficult to morph or animate the shape you have created. For that purpose I create a simple edge graph structure that keeps a tally of the cuts that have been added to the polygon and keeps them ordered. To generate the final triangles from those edges, you walk over the edges of the convex polygon; if an edge has been cut, you pick the first entry, jump over to the next edge in the list, and find its first entry as well. This gives you a diagonal edge with only two vertices, so that the convex polygon can be created with a simple triangle fan. But you also need to fan out the other half of the polygon that this diagonal edge just cut, to finish the generation. I might look into planar triangle simplification and constrained Delaunay triangulation to get rid of the splinter polygons sometime later down the road, but that would be part of some export step in the future.
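The fan step at the end of that walk is the simple part; a hedged sketch, assuming the convex polygon’s vertices (including any cut-inserted ones) are already in order, with the edge-graph bookkeeping omitted:

```python
# Fan-triangulate a convex polygon given its vertices in order.
def fan_triangulate(vertices):
    return [(vertices[0], vertices[i], vertices[i + 1])
            for i in range(1, len(vertices) - 1)]

quad = ['A', 'B', 'C', 'D']
print(fan_triangulate(quad))   # -> [('A', 'B', 'C'), ('A', 'C', 'D')]
```

Because every vertex inserted by a cut appears in the ordered list, the fan shares those vertices with the neighbouring polygon instead of leaving a t-vertex.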
As I mentioned before, I don’t use tree merges and I don’t use the sub-hyperplanes for maintaining the convex separation between the half spaces. The tree is just generated on demand when you want to manipulate the shapes, even though the implementation is part of the tree. If I see a use for it sometime in the future I might look into it, but at the moment it is only more costly, and I am more tempted to keep an implementation that could be made parallel than one that requires too much traversal.
My most recent performance tests show that my worst cases are not bad at all. The most expensive operation is the BSP tree generation, which would not be improved at all by having the sub-hyperplane and tree-merging setup.
A test case of a bunny (4196 triangles) and a torus (576 triangles):
– Machine: Intel Core(TM) i5-4670 @ 3.40 GHz with 8 GB of RAM.
– 0.32 sec to generate the BSP tree.
– 0.12 sec to compute the set-theoretic operation.
– 0.42 sec to generate the polygons from the planar geometry and create a new BSP tree.
All these steps are split up nicely so that I can move them into threads and distribute them over a few frames, delaying the polygonal generation by outputting the raw polygons from the planes as a temporary filler until the background thread wraps up its work.
The API, from importing and producing procedural planar shapes down to shaping the geometry and producing the final geometric output, is very nice and clean. I put a lot of time into making it as slim and maintainable as possible.
Right now, I am rewriting the old editor code to be simpler, trimming it down to a good vertical slice of the basic set of features and replacing the previous prototype implementation with something cleaner and more mature. All of my previous editor implementation is sitting behind comments until I have it completely replaced. I’m not solving the same problems again, just making sure that they are solved in a sane fashion. This covers the painting, camera navigation, scene navigation, persistence and user-facing editing features. Almost all of my previous geometry code has been made redundant, but the high-level data structures are still applicable for spatial and caching reasons.
I will only be putting a couple of hours of work a day into this for the next few weeks. My newborn son needs good attention and to be bootstrapped for good health.
Since I will be working more on user facing features, this blog might get a bit more active in the coming weeks.