Parallel execution is a database performance optimization in which parts of query execution happen simultaneously across multiple processors or servers. By spreading work across parallel computing resources, query operations complete faster than they would under serial execution.
Database engines use parallel execution frameworks and algorithms to coordinate query plan operations such as scans, aggregations, and sorts in parallel. The goals are to maximize resource utilization and minimize response times.
Parallel execution works in conjunction with query execution, distributed execution across clustered nodes, and partitioning strategies to achieve high performance at scale. Sophisticated engines can adaptively tune the degree of parallelism based on query complexity, data volume and system resources.
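As a rough illustration of that adaptive tuning, the sketch below picks a degree of parallelism from an estimated row count and the available CPU cores. The `choose_dop` helper and its row thresholds are hypothetical placeholders, not taken from any particular engine.

```python
import os

def choose_dop(estimated_rows: int, max_workers: int | None = None) -> int:
    """Pick a degree of parallelism (DOP) from a crude cost heuristic.

    Small inputs stay serial so coordination overhead does not outweigh
    the benefit; larger inputs scale up toward the core count.
    The row thresholds here are illustrative placeholders.
    """
    max_workers = max_workers or os.cpu_count() or 1
    if estimated_rows < 100_000:        # small scan: serial is cheapest
        return 1
    if estimated_rows < 10_000_000:     # medium scan: modest parallelism
        return min(4, max_workers)
    return max_workers                  # large scan: use all available cores

print(choose_dop(50_000))        # -> 1
print(choose_dop(2_000_000))     # -> up to 4
print(choose_dop(500_000_000))   # -> all available cores
```

Real optimizers consider many more signals, such as cost estimates, memory budgets, and concurrent system load, but the basic trade-off is the same: stay serial when the work is small and scale out when it is large.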
Database engines decompose query plans into stages or steps that can run concurrently, such as scanning different partitions of a table, computing partial aggregates, and sorting independent streams of rows. The query coordinator dispatches these work units to available resources and combines their partial results, as the sketch below illustrates.
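The following sketch mimics that coordinator/worker split for a simple grouped aggregation: the coordinator slices the input into work units, each worker computes per-group partial sums, and the coordinator merges the partials into the final result. It is a minimal illustration built on Python's multiprocessing module, not the implementation used by any specific database engine.

```python
from collections import Counter
from multiprocessing import Pool

# Toy rows: (group_key, value) pairs standing in for table data.
ROWS = [(i % 5, i) for i in range(1_000_000)]

def partial_aggregate(chunk):
    """Worker step: compute per-group sums for one work unit (chunk of rows)."""
    sums = Counter()
    for key, value in chunk:
        sums[key] += value
    return sums

def parallel_group_sum(rows, workers=4):
    """Coordinator: split rows into work units, dispatch to workers, merge results."""
    chunk_size = (len(rows) + workers - 1) // workers
    chunks = [rows[i:i + chunk_size] for i in range(0, len(rows), chunk_size)]
    with Pool(processes=workers) as pool:
        partials = pool.map(partial_aggregate, chunks)   # fan out work units
    merged = Counter()
    for part in partials:                                # combine partial results
        merged.update(part)
    return dict(merged)

if __name__ == "__main__":
    print(parallel_group_sum(ROWS))
```

In a real engine the work units would typically be ranges of data blocks or table partitions rather than in-memory list slices, and the merge step corresponds to a final gather phase in the query plan.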
With single-threaded serial execution, long-running operations in a query can use only one CPU core, leaving modern multi-core hardware underutilized. Parallel execution lets the engine leverage all available hardware resources efficiently.
By working in parallel, queries can see order-of-magnitude speedups compared to serial plans, improving application performance and reducing user-facing latency.
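How much of that speedup actually materializes depends on what fraction of the query's work can run in parallel. The snippet below uses Amdahl's law as a back-of-the-envelope estimate; the parallel fractions shown are illustrative assumptions, not measurements from any workload.

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Upper bound on speedup when only part of the work parallelizes."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / workers)

# A scan-heavy query where roughly 95% of the work parallelizes:
print(round(amdahl_speedup(0.95, 16), 1))   # ~9.1x with 16 workers
print(round(amdahl_speedup(0.95, 64), 1))   # ~15.4x: diminishing returns
```

The diminishing returns at higher worker counts are one reason engines cap the degree of parallelism rather than always using every core.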
Parallel execution helps most in scenarios such as full scans of large tables, complex joins and aggregations over many rows, bulk index builds, and data-warehouse or reporting queries that touch large volumes of data.
Some downsides to enabling parallelism include coordination and startup overhead, contention for CPU, memory, and I/O with concurrent workloads, and diminishing or even negative returns on small queries where the overhead outweighs the benefit.
Popular parallel execution frameworks include Oracle Database's parallel execution, SQL Server's parallel query processing, PostgreSQL's parallel query, and massively parallel engines such as Apache Spark and Trino.
Several related concepts round out the picture. Query execution is the process of carrying out the actual steps to retrieve results for a database query according to the generated execution plan.
Database partitioning refers to splitting large tables into smaller, independent pieces called partitions, stored across different filegroups, drives, or nodes.
Distributed execution refers to techniques for executing database queries efficiently across clustered servers or nodes, dividing work to utilize parallel resources.