Amazing! Thanks for making the code more Pythonic! But won't the if in the list comprehension be evaluated k times? In my version it was evaluated just once, but then the del caused O(k) shifts, so the two seemed roughly balanced anyway. I don't know precisely why, but when I benchmarked your code against mine, mine still ran in a quarter of the time taken by yours.
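For reference, here is a sketch of the two approaches being compared. The comprehension version (summands_comp) is my reconstruction of the suggested code, not a quote of it:

```python
from math import ceil

def summands_del(n):
    # Build the full list once, then delete the extra element:
    # the del costs O(k) shifts, but the condition is evaluated once.
    k = ceil(((1 + 8 * n) ** 0.5 - 1) / 2)
    the_sum = k * (k + 1) // 2
    summands = list(range(1, k + 1))
    if the_sum - n > 0:
        del summands[the_sum - n - 1]
    return summands

def summands_comp(n):
    # Filter with a comprehension: the != test runs k times,
    # but no element ever has to be shifted.
    k = ceil(((1 + 8 * n) ** 0.5 - 1) / 2)
    pluck = k * (k + 1) // 2 - n
    return [i for i in range(1, k + 1) if i != pluck]
```

Both return the same result, e.g. `[2, 3]` for n = 5; which one wins in practice is exactly what the benchmarks below try to settle.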
Yes indeed, I just got my hands on a computer and I tested everything out, you were faster than me :)
I get a very slight increase in performance with this version:
    from math import ceil  # no dynamic lookup of ceil

    def summands_opt(n):
        k = ceil(((1 + 8 * n) ** 0.5 - 1) / 2)  # one assignment fewer
        the_sum = k * (k + 1) // 2  # no cast, Python makes an int
        summands = list(range(1, k + 1))
        del summands[the_sum - n - 1]
        return summands
The del statement does not fail when there is nothing to delete.
Don't listen to this tired programmer who has done too much OCaml lately: the condition is necessary because of how Python interprets negative indices.
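To see why: when n is a triangular number, the_sum equals n, so the index the_sum - n - 1 evaluates to -1, and Python silently deletes the last element instead of doing nothing. A minimal demonstration:

```python
# k = 3, n = 6 (triangular), the_sum = 6
summands = [1, 2, 3]
index = 6 - 6 - 1      # -1, which Python reads as "last element"
del summands[index]    # silently removes 3, a summand we needed
print(summands)        # [1, 2] -- sums to 3, not 6
```

So without the guard, the unguarded del does not fail; it corrupts the result, which is arguably worse.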
I tried other versions, thinking that using more iterators would be faster, but surprisingly not. Here are all the versions and the timings I did:
    import math
    from math import ceil  # no dynamic lookup of ceil
    from itertools import chain

    def summands(n):
        k = ((1 + 8 * n) ** 0.5 - 1) / 2
        k = math.ceil(k)
        summands = list(range(1, k + 1))
        the_sum = int(k * (k + 1) / 2)
        if the_sum - n > 0:  # If n is not a triangular number.
            del summands[the_sum - n - 1]
        return summands

    def summands_opt(n):
        k = ceil(((1 + 8 * n) ** 0.5 - 1) / 2)
        the_sum = k * (k + 1) // 2
        summands = list(range(1, k + 1))
        if the_sum - n > 0:  # If n is not a triangular number.
            del summands[the_sum - n - 1]
        return summands

    def summands_iter(n):
        k = ceil(((1 + 8 * n) ** 0.5 - 1) / 2)
        the_sum = k * (k + 1) // 2
        pluck = max(0, the_sum - n)
        return list(chain(range(1, pluck), range(pluck + 1, k + 1)))

    def summands_list(n):
        k = ceil(((1 + 8 * n) ** 0.5 - 1) / 2)
        the_sum = k * (k + 1) // 2
        pluck = max(0, the_sum - n)
        return list(range(1, pluck)) + list(range(pluck + 1, k + 1))
The last two surprised me a lot: how can chaining iterators be slower than concatenating lists? But, unsurprisingly, defining a custom generator to do the chaining was even slower…
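The difference can be measured directly with a small timing harness like the one below (the numbers are machine-dependent, of course). One plausible explanation, offered here as a guess rather than a proven cause: ranges expose their length, so the two list() calls and the + concatenation can pre-size their results, while chain hides the total length from list() and forces it to grow the result incrementally.

```python
from itertools import chain
from timeit import timeit

k, pluck = 1000, 500

t_chain = timeit(
    lambda: list(chain(range(1, pluck), range(pluck + 1, k + 1))),
    number=10_000,
)
t_concat = timeit(
    lambda: list(range(1, pluck)) + list(range(pluck + 1, k + 1)),
    number=10_000,
)
print(f"chain:  {t_chain:.3f}s")
print(f"concat: {t_concat:.3f}s")
```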
However, once the cast to list is removed, summands_iter is the only version that does not run out of memory for very large numbers:
    def summands_iter(n):
        k = ceil(((1 + 8 * n) ** 0.5 - 1) / 2)
        the_sum = k * (k + 1) // 2
        pluck = max(0, the_sum - n)
        return chain(range(1, pluck), range(pluck + 1, k + 1))

    >>> summands_opt(2**60)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/christophe/python/partition.py", line 17, in summands_opt
        summands = list(range(1, k + 1))
    MemoryError
    >>> for i in summands_iter(2**60): print(i)  # still printing!
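Since the iterator version is lazy, huge inputs can also be consumed piecewise instead of printed forever, for instance with itertools.islice; a small usage sketch:

```python
from itertools import chain, islice
from math import ceil

def summands_iter(n):
    # Lazy version: never materialises the full list of ~sqrt(2n) summands.
    k = ceil(((1 + 8 * n) ** 0.5 - 1) / 2)
    the_sum = k * (k + 1) // 2
    pluck = max(0, the_sum - n)
    return chain(range(1, pluck), range(pluck + 1, k + 1))

# Peek at the first few summands of an astronomically large n
# without allocating roughly 2**30 integers.
print(list(islice(summands_iter(2**60), 5)))  # [1, 2, 3, 4, 5]
```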
Awesome! Now this is some production-quality code! Thanks for adding readability to the code as well.
Whoa! That was some deep analysis!