I really love @Eric Appelt's answer. It's amazing. For those who just want a concise and boring answer, here's my attempt at that:
In CPython (the reference implementation of Python), there's this thing called the GIL, or Global Interpreter Lock. Other implementations of Python, such as Jython and IronPython, don't have one.
The GIL means that in a given Python process, only one thread can execute Python bytecode at any moment. So effectively only one CPU core is ever doing Python work, regardless of how many threads you may have running.
Let's say you have a computer with a 4-core CPU. Contrary to what you may expect, if you have 4 threads that all want to use the CPU a lot, your program won't run any faster for having threads! In fact, it will probably run a bit slower, because the threads waste time contending for the GIL.
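Here's a toy demonstration of that claim. It times a pure-Python CPU-bound loop run twice serially versus in two threads; on stock CPython the threaded version comes out no faster (the exact timings will vary by machine):

```python
import threading
import time

def count(n):
    # Pure-Python CPU-bound loop: the thread holds the GIL
    # almost the entire time it runs this.
    total = 0
    for i in range(n):
        total += i
    return total

N = 2_000_000

# Run the work twice, one call after the other.
start = time.perf_counter()
count(N)
count(N)
t_serial = time.perf_counter() - start

# Run the same work in two threads "at once".
start = time.perf_counter()
threads = [threading.Thread(target=count, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
t_threaded = time.perf_counter() - start

# Because of the GIL, the threaded run is about as slow as the
# serial run (often a touch slower), despite having cores to spare.
print(f"serial: {t_serial:.2f}s  threaded: {t_threaded:.2f}s")
```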
So what are threads useful for in CPython? If your threads are I/O bound, that is, they spend most of their time sitting idle waiting for results from the disk drive, or fetching things from the Web, stuff like that, then you can get a real benefit out of multithreading. The threads that want the CPU can use it while the idle threads sleep, because a thread that's blocked waiting on I/O releases the GIL.
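A minimal sketch of that I/O-bound case, using `time.sleep` as a stand-in for a slow network or disk call (the function name `fake_download` is just made up for the example):

```python
import threading
import time

def fake_download(name, results):
    # Simulate an I/O-bound task. While a thread sleeps (or waits
    # on a real socket/disk read), it releases the GIL, so the
    # other threads get to run in the meantime.
    time.sleep(0.2)
    results[name] = f"data for {name}"

results = {}
threads = [threading.Thread(target=fake_download, args=(n, results))
           for n in ("a", "b", "c", "d")]

start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# The four 0.2s "downloads" overlap, so the total is
# roughly 0.2s rather than 0.8s.
print(f"finished {len(results)} downloads in {elapsed:.2f}s")
```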
If you do want to utilize more than one core, or more than one CPU for that matter, you can do so in CPython with the multiprocessing module. This can be annoying though, since each process has to run its own instance of the Python interpreter. I have not used this module myself, so I am not sure how communication among processes is handled. Traditionally, that is one benefit of threads over processes: all the threads share access to the same heap, while processes have to pass data to each other explicitly.
It's not really for 5 year olds, but I hope this explanation is clear for someone who knows some programming basics.
Eric Appelt mentions in this discussion that numpy code, for example, can run independently of the GIL, since its heavy lifting happens in compiled native code rather than in Python bytecode run by the interpreter: numpy releases the GIL while it crunches numbers. That is another case where the GIL will not interfere, and threads can genuinely run on multiple cores at a time. However, I have never used numpy, so I don't know the details.
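You can see the same effect without numpy, using only the standard library: CPython's `hashlib` releases the GIL while hashing buffers bigger than a couple of kilobytes, so threads doing this kind of native-code work really can proceed in parallel. A small sketch:

```python
import hashlib
import threading

# 8 MB of data: well above the size threshold at which
# hashlib drops the GIL while the C code does the hashing.
data = b"x" * (8 * 1024 * 1024)

digests = []
lock = threading.Lock()

def hash_it():
    # The sha256 computation happens in compiled code with the
    # GIL released, like numpy's number crunching.
    d = hashlib.sha256(data).hexdigest()
    with lock:
        digests.append(d)

threads = [threading.Thread(target=hash_it) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(digests), "identical digests:", digests[0][:12], "...")
```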
> This can be annoying though, since it is my understanding that each process will have to run its own instance of the Python runtime.
I wouldn't say it's annoying; just don't expect the overhead to be something tiny like 3 KB of memory per process.
The various ways of creating a process are explained in the docs at docs.python.org/3.6/library/multip... but long story short: you can either spawn a process (a brand-new interpreter) or fork the existing one (a clone). On Unix the default is fork; on Windows only spawn is available.
Processes can communicate through queues or pipes, or they can share parts of memory directly (not recommended).
Queues and pipes are implemented on top of operating-system pipes, which is not that different from what happens when, in Unix, you do:
cat verylongfile.txt | sort
Those two totally unrelated processes communicate through a Unix pipe.