Gen 1 had some significant accuracy issues caused by its arc interpolation. While it was both more accurate and faster than Pathfinder, it relied on an imperfect approximation in place of the computationally expensive arc-length parameterization.
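For context, the quantity in question is the arc length of the parametric path $\mathbf{r}(t) = (x(t), y(t))$. For the polynomial splines typically used for these paths it has no simple closed form, so it has to be approximated (as Gen 1 did) or integrated numerically:

$$
s(t) = \int_0^t \lVert \mathbf{r}'(\tau) \rVert \, d\tau = \int_0^t \sqrt{x'(\tau)^2 + y'(\tau)^2} \, d\tau
$$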
This generation will attempt to take the best of both worlds by numerically integrating the path length intelligently and reusing past calculations as much as possible.
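As a rough sketch of what "integrate the path length numerically and reuse past calculations" could look like (an illustration under assumptions, not the actual implementation: the `ArcLengthIntegrator` name, the midpoint quadrature, and the fixed step size are all hypothetical), the integrator below lazily extends a cached cumulative arc-length table, so each lookup continues from the last cached sample instead of re-integrating from the start:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <functional>
#include <utility>
#include <vector>

// Hypothetical sketch: a cumulative arc-length table that is built lazily.
// Each lookup extends the table from the last cached sample instead of
// re-integrating from t = 0, so earlier work is reused by later lookups.
class ArcLengthIntegrator {
 public:
  using Derivative = std::function<double(double)>;

  ArcLengthIntegrator(Derivative dxdt, Derivative dydt, double dt = 1e-4)
      : dxdt_(std::move(dxdt)), dydt_(std::move(dydt)), dt_(dt) {
    samples_.push_back({0.0, 0.0});  // (t, cumulative arc length s)
  }

  // Arc length from t = 0 to t. Lookups are cheapest when t increases
  // monotonically across calls, which is the common case during generation.
  double ArcLengthAt(double t) {
    // Extend the cached table only as far as this lookup requires.
    while (samples_.back().t < t) {
      const Sample last = samples_.back();
      const double next_t = last.t + dt_;
      const double mid = 0.5 * (last.t + next_t);  // midpoint quadrature
      const double speed = std::hypot(dxdt_(mid), dydt_(mid));
      samples_.push_back({next_t, last.s + speed * dt_});
    }
    // Linearly interpolate between the two cached samples that bracket t.
    for (std::size_t i = samples_.size() - 1; i > 0; --i) {
      if (samples_[i - 1].t <= t) {
        const Sample& a = samples_[i - 1];
        const Sample& b = samples_[i];
        return a.s + (t - a.t) / (b.t - a.t) * (b.s - a.s);
      }
    }
    return 0.0;
  }

 private:
  struct Sample { double t, s; };
  Derivative dxdt_, dydt_;
  double dt_;
  std::vector<Sample> samples_;
};

int main() {
  // Quarter of a unit circle: x = cos(t), y = sin(t), t in [0, pi/2].
  const double kPi = std::acos(-1.0);
  ArcLengthIntegrator arc([](double t) { return -std::sin(t); },
                          [](double t) { return std::cos(t); });
  std::printf("s(pi/2) = %.6f (exact %.6f)\n", arc.ArcLengthAt(kPi / 2),
              kPi / 2);
}
```

A real implementation would likely use a higher-order quadrature rule (e.g. Gauss-Legendre per spline segment) and an adaptive or per-segment step size rather than a fixed one, but the caching idea is the same.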
Lachlan came up with this new strategy:
So, part of my current strategy has been the assumption that it's necessary to pre-calculate/approximate the total arc length of the path before generation starts (I use that to set the displacement for the velocity profile, and to estimate how much memory to reserve so I can reduce the overhead of constantly growing a std::vector), and as far as I can tell, calculating the arc length first tends to be how it's done elsewhere as well (Pathfinder, WPILib, etc.).
And so, when you calculate without a geometric approximation of the path or an arc-length approximation, what you tend to do is iterate along it very slowly, making thousands of tiny advancements, as Pathfinder does (and I might add, Pathfinder does this really inefficiently, since it starts calculating from the beginning each time it needs to do a lookup).
Now, my thought is that you don't necessarily need to pre-compute the arc length at all.
The rough strategy I've come up with goes as follows:
In testing, Lachlan measured 3.8 mm and 0.14 degrees of error on a path that executes in 3.11 seconds, and 22 mm and 0.2 degrees of error on a path that executes in 13.2 seconds.
The trade-off is a slightly increased generation time.