A snake which bites its tail: PyPy JITting itself
Briefly

"Readers brave enough to run translate.py to translate PyPy by themselves surely know that the process takes quite a long time to complete, about a hour on super-fast hardware and even more on average computers. Unfortunately, it happened that translate.py was a bad match for our JIT and thus ran much slower on PyPy than on CPython. One of the main reasons is that the PyPy translation toolchain makes heavy"
"So, today we decided that it was time to benchmark again PyPy against itself. First, we tried to translate PyPy using CPython as usual, with the following command line (on a machine with an "Intel(R) Xeon(R) CPU W3580 @ 3.33GHz" and 12 GB of RAM, running a 32-bit Ubuntu): $ python ./translate.py -Ojit targetpypystandalone --no-allworkingmodules ... lots of output, fractals included ... [Timer] Timings: [Timer] annotate --- 252.0 s [Timer] rtype_lltype --- 199.3 s"
Translating PyPy with translate.py historically takes roughly an hour on high-end hardware and longer on average machines. translate.py previously ran much more slowly on PyPy than on CPython because the translation toolchain relies heavily on custom metaclasses, which used to disable central JIT optimizations. During a recent Düsseldorf sprint, developers fixed the metaclass-related problem so that those optimizations stay enabled even when metaclasses are present. As a baseline, PyPy was translated using CPython on a 32-bit Ubuntu machine with an Intel Xeon W3580 and 12 GB of RAM; the full process took 2234.2 seconds.
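To make the metaclass point concrete, here is a minimal sketch of the general pattern, assuming nothing about the actual RPython toolchain code: a class whose type is a custom metaclass rather than plain type. The names AutoRegister and Node are invented for illustration only.

# Hypothetical sketch (not from the PyPy sources): a custom metaclass
# that records every class built with it.
class AutoRegister(type):
    registry = []

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        mcls.registry.append(cls)
        return cls

class Node(metaclass=AutoRegister):
    pass

# The type of Node is AutoRegister rather than plain `type`: the kind of
# pattern that, before the sprint fix, disabled some JIT optimizations.
assert type(Node) is AutoRegister
print(AutoRegister.registry)  # [<class '__main__.Node'>]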
Read at Antocuni