Coconut (coconut-lang.org) compiles to Python (its syntax is a strict superset of Python) but performs automatic tail call optimization, so I was curious how fast this would be in Coconut. Surprisingly, Coconut's automatic tail call optimization looks about twice as fast as pfun at `deep_left_bind`, despite the fact that Coconut does everything in pure Python. See below (you'll need to run `pip install -U coconut-develop pfun` to replicate; I used `coconut-develop==1.5.0-post_dev49`):
Hi Evan! Coconut is an awesome project! This benchmark is pretty impressive. I can think of a few reasons why Coconut's tail call optimization is faster than pfun in this benchmark:
pfun does other things besides effect interpretation and trampolining; chiefly, it integrates with asyncio. So the `run` function in fact calls `asyncio.run` on the awaitable produced by effect interpretation. pfun also manages thread and process pool executors to avoid blocking the main thread when interpreting IO- and CPU-bound effects. This also adds overhead.
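For context on why the asyncio layer matters, here is a small timing sketch (illustrative only, not pfun's actual code, and `trivial` is a made-up stand-in coroutine): `asyncio.run` creates a fresh event loop, runs the coroutine, and tears the loop down again, a fixed cost that a pure-Python trampoline loop never pays.

```python
import asyncio
import timeit

async def trivial():
    # Stands in for an already-interpreted effect: no real async work.
    return 42

# asyncio.run starts a new event loop, drives the coroutine to completion,
# and shuts the loop down -- all overhead relative to a plain function call.
print("asyncio.run:", timeit.timeit(lambda: asyncio.run(trivial()), number=1000))
print("plain call: ", timeit.timeit(lambda: 42, number=1000))
```

Even when the work itself is trivial, the `asyncio.run` version is orders of magnitude slower per call, which is the kind of fixed overhead the benchmark would pick up.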
Is Coconut's TCO a trampoline, or does it actually eliminate stack frames? The latter will clearly be faster, since a trampoline introduces extra function calls into the interpretation process.
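To make the trampoline-vs-frame-elimination distinction concrete, here is a minimal sketch in plain Python (illustrative only; neither pfun's nor Coconut's actual implementation). The trampoline pays an object allocation and a dispatch step per tail call, while the loop version eliminates the calls entirely:

```python
class Call:
    """A thunk representing a deferred tail call."""
    def __init__(self, fn, *args):
        self.fn, self.args = fn, args

def trampoline(fn):
    """Drive a trampolined function: keep invoking thunks until a value appears."""
    def run(*args):
        result = fn(*args)
        while isinstance(result, Call):
            result = result.fn(*result.args)
        return result
    return run

def _countdown(n, acc=0):
    # Instead of recursing, return a thunk describing the next tail call.
    if n == 0:
        return acc
    return Call(_countdown, n - 1, acc + n)

countdown = trampoline(_countdown)

# True frame elimination: the same tail recursion rewritten as a loop,
# with no per-step function calls or allocations at all.
def countdown_loop(n, acc=0):
    while n:
        n, acc = n - 1, acc + n
    return acc

print(countdown(100_000))       # works far past the recursion limit
print(countdown_loop(100_000))
```

The loop form is roughly what a compiler doing real tail call elimination would emit for a self-recursive function, which is why it avoids both `RecursionError` and the trampoline's per-step overhead.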
When I started this project I did actually look at Coconut as an implementation language for pfun (I reckoned that writing a functional library in a functional language would be more fun). I eventually opted for pure Python because I wanted static type checking to be a core feature of pfun. At the time Coconut did have MyPy integration, but I was not completely satisfied with having type errors reported on the generated code rather than the Coconut code itself. Maybe that has improved since.
In any case, I love Coconut, and would love to see some of its features exported to Python :D