Divergence, however, is not a malady that has to be avoided at all costs. It is, in fact, a virtue of a divergent series. The mathematician George Carrier made the somewhat scandalous yet heuristically verifiable claim that “Divergent series converge faster than convergent series because they don’t have to converge.” True enough, the sum of the first few terms of a divergent series already gives an accurate approximation of the exact result. This points to the utility of divergent series and hints that they are meaningful mathematical constructs. Indeed they are, and they have a name: they are called asymptotic series.
In general, the terms of an asymptotic series initially decrease in magnitude and then monotonically increase without bound. The sum of the first few terms up to the least term is known as the superasymptotic sum of the divergent series, and it is already an accurate approximation of the exact solution. A convergent series representation of the exact solution may need a much larger number of terms to match the accuracy of the superasymptotic sum, indicating that a divergent series can be much more useful than a convergent one.
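As a concrete illustration (my own sketch, not taken from the talk), consider the classic Stieltjes integral S(x) = ∫₀^∞ e⁻ᵗ/(1 + xt) dt. Expanding 1/(1 + xt) and integrating term by term gives the divergent series Σ (−1)ⁿ n! xⁿ, whose terms shrink and then grow exactly as described above. Truncating at the least term yields the superasymptotic sum:

```python
import math

def stieltjes_exact(x, tmax=60.0, steps=200_000):
    """S(x) = integral_0^inf e^{-t}/(1 + x t) dt, by composite trapezoid rule."""
    h = tmax / steps
    total = 0.5 * (1.0 + math.exp(-tmax) / (1.0 + x * tmax))
    for i in range(1, steps):
        t = i * h
        total += math.exp(-t) / (1.0 + x * t)
    return h * total

def partial_sum(x, nterms):
    """Sum of the first nterms terms of the divergent series sum_n (-1)^n n! x^n."""
    s, term = 0.0, 1.0
    for n in range(nterms):
        s += term
        term *= -(n + 1) * x   # t_{n+1} = -(n+1) x t_n
    return s

def superasymptotic_sum(x):
    """Truncate at the least term: add terms only while they keep shrinking."""
    s, term, n = 0.0, 1.0, 0
    while True:
        s += term
        nxt = -(n + 1) * x * term
        if abs(nxt) >= abs(term):   # terms have started to grow; stop here
            return s
        term, n = nxt, n + 1

x = 0.1
exact = stieltjes_exact(x)
print(f"exact              : {exact:.6f}")
print(f"superasymptotic    : {superasymptotic_sum(x):.6f}")
print(f"err (optimal stop) : {abs(superasymptotic_sum(x) - exact):.1e}")
print(f"err (4 terms)      : {abs(partial_sum(x, 4) - exact):.1e}")
print(f"err (26 terms)     : {abs(partial_sum(x, 26) - exact):.1e}")
```

At x = 0.1 the least term occurs around n = 1/x = 10, and stopping there beats both an early truncation and a longer sum, whose error blows up with the growing terms.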
However, adding more terms beyond the least term only degrades the accuracy of the superasymptotic sum, so its accuracy is inherently limited. One is naturally led to ask: is it possible to go beyond the accuracy of the superasymptotic sum using only the information contained in the divergent series? The superasymptotic sum exploits only the first few terms up to the least term and ignores the rest of the infinite series, known in aggregate as the remainder term. Indeed, the remainder can be manipulated to obtain greater accuracy.
The remainder term can be re-expanded in such a way that it manifests the behavior of an asymptotic series: initial decrease of the terms followed by their eventual divergence. A superasymptotic sum of this new series is then taken up to its own least term and added to the first superasymptotic sum, and the result gives better accuracy. This process can be repeated, re-expanding each new remainder term and performing another superasymptotic truncation. However, the process cannot be repeated indefinitely, so the accuracy has a natural barrier. This method of summing a divergent series is known as hyperasymptotic summation, and the resulting sum is known as the hyperasymptotic sum.
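Schematically, the staged procedure just described can be written as follows (a sketch only; the scale functions ψ, χ at each stage depend on the particular problem):

```latex
\[
  f(x) \;=\; \underbrace{\sum_{n=0}^{N_0} a_n\,\varphi_n(x)}_{\text{superasymptotic sum}}
  \;+\; R_0(x),
\]
\[
  R_0(x) \;=\; \sum_{m=0}^{N_1} b_m\,\psi_m(x) \;+\; R_1(x), \qquad
  R_1(x) \;=\; \sum_{k=0}^{N_2} c_k\,\chi_k(x) \;+\; R_2(x), \;\ldots
\]
% Each stage is truncated at its own least term (N_0, N_1, N_2, ...),
% and the iteration terminates after finitely many stages,
% which is the source of the accuracy barrier.
```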
One is similarly led to ask: can we break through the hyperasymptotic accuracy barrier? Michael Berry, the inventor of hyperasymptotics, asked this question much earlier; and if a summation existed that extended the accuracy of the hyperasymptotic sum, he would call it ultra-asymptotic summation. Just recently, we found numerical evidence that the accuracy of hyperasymptotics can be surpassed.
In the talk “Divergent integral just got more convergent,” which I gave at the Samahang Pisika ng Pilipinas (Physical Society of the Philippines) National Conference this year, I discussed how the standard Poincaré asymptotic series can have a spectacularly accurate resummation. Spectacular because the resummation may be up to 70 orders of magnitude more accurate than the Poincaré asymptotic expansion itself. The resummations were discovered by Kaye Martinez and myself in considering the asymptotic expansion of exponentially small Hankel transforms using the distributional approach to asymptotics. This was reported here last year; an updated version of the paper is here. There it was only hinted that the resummed Poincaré series might offer a way toward ultra-asymptotic summation.
In the months following the publication of Kaye’s paper, Reiseth Fajardo made some progress in his MS thesis (which he successfully defended early this year) in comparing the relative accuracies of the resummed Poincaré series and the hyperasymptotic sum of the same series. My talk culminated with my report on Reiseth’s numerical results, which showed that the resummed series was more accurate than the hyperasymptotic sum of the Poincaré asymptotic series of a particular known integral. This is evidence that the hyperasymptotic accuracy barrier can be surpassed. However, I cannot make general claims from this single piece of evidence at the moment. Hopefully, Reiseth will be able to provide compelling evidence that a resummation exists that can surpass the accuracy of the hyperasymptotic sum in the general case.
The presentation is found here.

How would the non-asymptotic scale of the recent results (as discussed in the presentation) “change the game” so to speak, for the theory of divergent series, which is already quite well-developed in the case of asymptotic (and in particular Gevrey) series?
The current theory of asymptotics, including Gevrey asymptotics, is based on asymptotic scales. An asymptotic scale is a sequence of functions with the property that each element of the sequence vanishes faster than the element immediately preceding it as the asymptotic parameter becomes, say, arbitrarily large. This property is desirable because it enables us to compute the asymptotic expansion explicitly in terms of a given asymptotic scale; in fact, it is by expansion in such a scale that a divergent series gets identified as an asymptotic series. However, it turns out that there are divergent series that are expanded in terms of a sequence of functions that is not an asymptotic scale. In the paper of Kaye and myself, it is shown that such a divergent series in a non-asymptotic scale can arise from a resummation of a divergent series in an asymptotic scale. The most unexpected result there is that the series in the non-asymptotic scale is several orders of magnitude more accurate than the same series in the asymptotic scale. So if the purpose of asymptotics is approximation, then non-asymptotic scales may offer more accuracy than asymptotic ones. This is certainly a “game changer,” as it behooves us now to reconsider the foundations of asymptotic theory.
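For concreteness, the defining property of an asymptotic scale described above is standardly written as follows (here for the asymptotic parameter x growing arbitrarily large):

```latex
% {phi_n} is an asymptotic scale as x -> infinity when each element
% vanishes faster than its predecessor:
\[
  \varphi_{n+1}(x) = o\bigl(\varphi_n(x)\bigr)
  \quad \text{as } x \to \infty, \qquad n = 0, 1, 2, \ldots
\]
% A series \sum_n a_n \varphi_n(x) is then an asymptotic expansion of f
% in the Poincare sense when, for each N,
\[
  f(x) - \sum_{n=0}^{N} a_n \varphi_n(x) = o\bigl(\varphi_N(x)\bigr)
  \quad \text{as } x \to \infty .
\]
```

A non-asymptotic scale is precisely a sequence of expansion functions for which the first condition fails, which is why expansions in such scales fall outside the standard framework.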