dispy is a comprehensive yet easy-to-use framework for creating and using compute clusters to execute computations in parallel across multiple processors on a single machine (SMP), or among many machines in a cluster, grid, or cloud.
dispy is well suited to the data-parallel (SIMD) paradigm, where a computation (a Python function or a standalone program) is evaluated independently with different (large) datasets, with no communication among computation tasks (except for tasks sending Provisional/Intermediate Results or Transferring Files to the client).
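As a minimal sketch of this data-parallel usage (the function `compute`, its inputs, and the job ids are illustrative; the cluster part is placed in a function that is not called here, since it requires dispy to be installed and at least one dispynode running on the network):

```python
def compute(n):
    # each job runs independently on some node with its own input;
    # no communication with other jobs is needed
    import math
    return sum(math.sqrt(i) for i in range(n))

def run_cluster():
    # not invoked here: requires dispy installed and a running dispynode
    import dispy
    cluster = dispy.JobCluster(compute)
    jobs = []
    for i in range(1, 5):
        job = cluster.submit(10 ** i)  # schedule compute(10**i) on some node
        job.id = i                     # user-assigned identifier
        jobs.append(job)
    for job in jobs:
        result = job()                 # waits for this job and returns its result
        print(job.id, result)
    cluster.close()
```

Calling `run_cluster()` with one or more dispynodes running would distribute the four jobs across the available nodes and print each job's id and result as it finishes.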
If communication/cooperation among tasks is needed, the Distributed Communicating Processes module of the pycos framework can be used instead.
Some of the features of dispy:
dispy is implemented with pycos, an independent framework for asynchronous, concurrent, distributed network programming with tasks (without threads). pycos uses non-blocking sockets with the I/O notification mechanisms epoll, kqueue, poll, and Windows I/O Completion Ports (IOCP) for high performance and scalability, so dispy works efficiently with a single node or large clusters of nodes; one user reported using dispy with 500 nodes on Google Cloud Platform. pycos itself supports distributed/parallel computing, including transferring computations and files, and message passing (for communicating with the client and other computation tasks). While dispy can be used to schedule jobs of a computation and collect the results, pycos can be used to create distributed communicating processes for a broad range of use cases, including in-memory processing, data streaming, and real-time (live) analytics.
Computations (Python functions or standalone programs) and their dependencies (files, Python functions, classes, modules) are distributed to nodes automatically. Computations that are Python functions can also transfer files from the nodes to the client.
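A sketch of shipping a dependency along with a computation using JobCluster's `depends` parameter (the helper function and its logic are illustrative; the cluster-creating function is not called here because it needs a running dispynode):

```python
def scale(x, factor):
    # helper used by compute; sent to the nodes via 'depends'
    return x * factor

def compute(n):
    return scale(n, 10) + 1

def make_cluster():
    # not invoked here; requires dispy and a running dispynode.
    # 'depends' may list Python functions, classes, modules, or file paths,
    # all of which dispy transfers to the nodes automatically.
    import dispy
    return dispy.JobCluster(compute, depends=[scale])
```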
Computation nodes can be anywhere on the network (local or remote). For security, either simple hash-based authentication or SSL encryption can be used.
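A sketch of enabling these security options when creating a cluster, assuming the nodes are started with the matching secret or certificate ('cluster-password' and 'server.pem' are illustrative values, and the cluster-creating function is not called here):

```python
def compute(n):
    return n + 1

def make_secure_cluster():
    # not invoked here; each dispynode must be started with the same
    # secret, or with the corresponding SSL certificate/key
    import dispy
    return dispy.JobCluster(compute,
                            secret='cluster-password',  # hash-based authentication
                            certfile='server.pem')      # SSL certificate (optional)
```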
After each job finishes, its result, output, errors, and exception trace are made available for further processing.
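These are exposed as attributes of the returned DispyJob object. A sketch (the `compute` body is illustrative; `show_job` would be called after `job()` has waited for the job to finish):

```python
def compute(n):
    print('processing', n)   # captured on the node, returned as job.stdout
    if n < 0:
        raise ValueError('n must be non-negative')  # captured as job.exception
    return n * n

def show_job(job):
    # DispyJob fields populated once the job has finished
    print('result   :', job.result)
    print('stdout   :', job.stdout)
    print('stderr   :', job.stderr)
    print('exception:', job.exception)
```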
In-memory processing is supported (with some limitations under Windows); i.e., computations can work on data in memory instead of loading data from files each time.
Nodes may become available dynamically: dispy schedules jobs whenever a node is available and the computation is allowed to use that node.
Job and cluster status notification mechanisms allow for asynchronous processing of job results, customized job schedulers etc.
Client-side and server-side fault recovery are supported:
If the user program (client) terminates unexpectedly (e.g., due to an uncaught exception), the nodes continue to execute the scheduled jobs. The results of jobs that were scheduled (but unfinished at the time of the crash) for that cluster can be retrieved easily with (Fault) Recover Jobs.
If a computation is marked reentrant when a cluster is created and a node (server) executing jobs for that computation fails, dispy automatically resubmits those jobs to other available nodes.
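A sketch of both recovery features (neither function is called here; `recover_file` refers to the recovery file dispy writes for each cluster, whose generated name would be passed in by the user):

```python
def make_reentrant_cluster(compute):
    # reentrant=True: if a node fails mid-job, dispy resubmits
    # that job to another available node
    import dispy
    return dispy.JobCluster(compute, reentrant=True)

def recover(recover_file):
    # after a client crash, dispy.recover_jobs reads the recovery file
    # dispy saved for the cluster and collects the finished jobs' results
    import dispy
    jobs = dispy.recover_jobs(recover_file)
    return [job.result for job in jobs]
```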
dispy can be used in a single process to use all the nodes exclusively (with JobCluster), or in multiple processes simultaneously sharing the nodes (with SharedJobCluster and the dispyscheduler Shared Execution program).
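A sketch of the shared variant (the scheduler address is illustrative; the cluster-creating function is not called here, since it needs dispyscheduler and dispynode running on the network):

```python
def compute(n):
    return n * n

def make_shared_cluster():
    # not invoked here; multiple client programs can share the nodes
    # by submitting jobs through the same dispyscheduler instance.
    # 'scheduler_node' is the host running dispyscheduler.
    import dispy
    return dispy.SharedJobCluster(compute, scheduler_node='192.168.1.10')
```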
Clusters can be monitored and managed with a web browser, including from iOS and Android devices.
Set up private compute infrastructure with existing hardware (the only requirements are that the computers are connected and have Python installed), or use external cloud computing services (users have reported using dispy with Amazon EC2, Google Cloud, and Microsoft Azure), either exclusively or in addition to any local compute nodes. See Cloud Computing for details.
dispy works with Python versions 2.7+ and 3.1+ and is tested on Linux, OS X, and Windows; it may work on other platforms too. dispy also works with the JIT interpreter PyPy.