
Discussion on: 99 problems but Python ain't one!

Janne "Lietu" Enberg

Some of what you say has a point. Yes, claiming Python is close to C in performance is untrue in most cases, but the point is that it usually doesn't make much of a difference.

Sure you can use PyPy or Cython to make your code run faster, but then you risk running your code on a less popular and less supported runtime, or sacrificing portability.

True to some extent, but in a lot of cases PyPy is just a drop-in replacement with no special considerations to keep in mind, and nobody is suggesting writing your whole codebase in Cython or the like, just the bits that are performance critical.
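To illustrate the "drop-in" point: a plain-Python hot loop like the one below runs unchanged under CPython (`python script.py`) or PyPy (`pypy3 script.py`), and PyPy's JIT typically speeds up exactly this kind of code. The `checksum` function is a made-up example, not something from the discussion.

```python
def checksum(data):
    """Tight numeric loop - the kind of code PyPy's JIT accelerates,
    with no source changes required."""
    total = 0
    for value in data:
        total = (total + value * 31) % 1_000_003
    return total

if __name__ == "__main__":
    # Same file, same semantics under CPython and PyPy;
    # only the wall-clock time differs.
    print(checksum(range(1_000_000)))
```

No imports, no runtime-specific APIs: that's what makes the swap low-risk for code like this. The caveats show up mainly around C extensions, which is where the "less supported" concern above applies.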

but you can't deny that faster languages (and there are many of them) require fewer resources, which is beneficial especially if you run your programs on the cloud, where every CPU cycle counts as money.

I actually can deny that. Generally on the cloud, unless you're using something like Lambda, you pay for uptime, not for CPU cycles used. It doesn't matter much if your CPU is 90% idle, or 99% idle, you pay the same. When building applications you host in the cloud your performance constraints tend to be I/O related - how fast does some other service respond, how fast is the internet connection, how fast is the disk, etc. - instead of how fast is your CPU.
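The I/O-bound point can be made concrete with a toy measurement. This is a simulation (the "network call" is just a sleep standing in for a 50 ms round trip to another service), but the shape of the numbers is realistic for typical cloud services:

```python
import time

def fake_network_call():
    """Stand-in for a request to another service (simulated with sleep)."""
    time.sleep(0.05)  # ~50 ms of waiting; the CPU is idle the whole time
    return {"status": "ok"}

def process(response):
    """The actual CPU-bound work - trivial by comparison."""
    return sum(ord(c) for c in response["status"])

start = time.perf_counter()
response = fake_network_call()
io_time = time.perf_counter() - start

start = time.perf_counter()
result = process(response)
cpu_time = time.perf_counter() - start

# io_time dwarfs cpu_time by orders of magnitude, so making the
# CPU part 10x faster barely moves the total at all.
```

When the request spends 50 ms waiting on the network and microseconds computing, the raw speed of the language is not where your money goes.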

Add to that the fact that, most of the time, the performance problems you can directly affect with your code are "algorithm" problems (as people like to call them), i.e. just doing things in a less than optimal way, rather than your actual programming language being slow. These problems are much more easily solved in a programmer-friendly language such as Python.
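A classic example of such an "algorithm" problem (illustrative, not from the discussion): membership checks against a list are O(n) each, so the first function is O(n²) overall; swapping the list for a set makes each check a hash lookup and the whole thing O(n). No language change needed.

```python
def find_duplicates_slow(items):
    """O(n^2): every 'in' check scans the whole list."""
    seen = []
    duplicates = []
    for item in items:
        if item in seen:
            duplicates.append(item)
        else:
            seen.append(item)
    return duplicates

def find_duplicates_fast(items):
    """O(n): every 'in' check is a constant-time hash lookup."""
    seen = set()
    duplicates = []
    for item in items:
        if item in seen:
            duplicates.append(item)
        else:
            seen.add(item)
    return duplicates
```

Rewriting the slow version in C would make it a faster O(n²); fixing the data structure makes it O(n) in any language, and Python makes that fix a one-line change.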

Also, when people complain about CPU cycles costing money in the cloud, they generally have not fully grasped the concepts of "budgeting", "scheduling", "good enough" and "the real world". In the real world, you tend to build applications that are good enough for your needs, you have a schedule you need to keep, and a limited amount of money to do that with.

If you spend your time and money trying to build your micro-optimized application in C instead of building it in Python, you will find out that you have the same performance problems as you would've had with Python, but you've run out of time and money to release, and you go bankrupt.

Yes, but that'd probably mean you're choosing the wrong tool for the job. For most of the stuff Python does, you're not gonna need to resort to C/C++ as an alternative. However, there are certain applications that need C/C++ levels of performance that Python just won't be able to achieve.

Sure, there are areas where Python isn't the right tool, and there are areas where C isn't the right tool. Btw, MicroPython is a cool tool for embedded programming too.

That doesn't mean that if you're writing a tool that needs to process a few hundred million entries in the database you should write it in C because it's faster. What you SHOULD do is write it quickly in Python, realize you don't want to wait 24 hours to see the results, and slap on multiprocessing to parallelize the task to 24 processes and wait an hour instead.
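The "slap on multiprocessing" step might look like the sketch below. `process_entry` is a placeholder for whatever the real per-entry work would be, and the worker count is just an example:

```python
from multiprocessing import Pool

def process_entry(entry):
    """Placeholder for the real per-entry work (parsing, transforming, etc.)."""
    return entry * 2

if __name__ == "__main__":
    entries = range(1_000)  # stand-in for the millions of database rows
    # Fan the work out to worker processes; chunksize keeps the
    # inter-process overhead per entry low.
    with Pool(processes=8) as pool:
        results = pool.map(process_entry, entries, chunksize=100)
    print(len(results))
```

A few lines of plumbing around otherwise unchanged logic, which is the point: the parallelization is bolted on after the simple version already works.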

You end up spending much less of your time solving the problem, and probably get a better end result that is easier to understand, easier to refactor, and so on.

Caring about performance is a good thing. It will not only result in faster code, but also make you a better programmer.

Now you're just reaching for a strawman fallacy; nobody said you must not care about performance. Performance profiling and optimization are vital if you have issues. If you have no issues, they're generally pointless.
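"Profile first" is cheap to do in Python with the standard library. A minimal sketch, with `slow_part`/`fast_part` as made-up stand-ins for real code:

```python
import cProfile
import io
import pstats

def slow_part():
    """Deliberately heavy stand-in for a hot spot."""
    return sum(i * i for i in range(200_000))

def fast_part():
    """Stand-in for code that isn't worth optimizing."""
    return 42

def main():
    slow_part()
    fast_part()

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Print the top entries by cumulative time; the hot spot
# shows up immediately, and anything that doesn't show up
# here isn't worth your optimization time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

If the report shows no hot spot worth the effort, you're done, which is exactly the "if you have no issues, it's pointless" case.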

Someone once said "premature optimization is the root of all evil", and that starts with your choice of language. Python is good enough for most purposes. There are purposes where it's not, and you need to think a bit to make sure you choose the right tool for the job in every case.

To me it's not just a matter of what personal preferences I have in a language, or their supposed performance characteristics in some artificial test suite published online, but a matter of practicalities.

  1. Who's going to be working on the software - what languages and tools do they know, and what can they learn in the time necessary?
  2. How much time do I have?
  3. How much money do I have?
  4. What are the specific requirements I must take into account - does it need to run in 16kB of RAM and respond within nanoseconds, or is it good enough if we speak about megabytes and milliseconds? Does it need to be scalable first (both in terms of team, as well as deployments), and performant later?