PostgreSQL execution plans:

testdb2=# explain select * from test;
                       QUERY PLAN
--------------------------------------------------------
 Seq Scan on test  (cost=0.00..18.50 rows=850 width=68)
(1 row)

Seq Scan: a sequential scan, i.e. a full-table scan.
cost=0.00..18.50: the first number (0.00) is the startup cost, i.e. the cost incurred before the first row can be returned; the second number (18.50) is the estimated total cost of returning all rows.
rows=850: the estimated number of rows returned.
width=68: the estimated average width of each row, in bytes.

"cost" describes how expensive a statement is to execute. By default, the cost of each basic operation is:

    sequentially reading one data page: cost 1
    randomly reading one data page: cost 4
    CPU cost of processing one data row: cost 0.01
    CPU cost of processing one index row: cost 0.005
    CPU cost per operator: cost 0.0025

testdb2=# explain select a.id,b.id,a.col1,b.col1 from testtab05 a join testtab6 b on a.id = b.id ;
                                 QUERY PLAN
----------------------------------------------------------------------------
 Merge Join  (cost=176.34..303.67 rows=8064 width=72)
   Merge Cond: (a.id = b.id)
   ->  Sort  (cost=88.17..91.35 rows=1270 width=36)
         Sort Key: a.id
         ->  Seq Scan on testtab05 a  (cost=0.00..22.70 rows=1270 width=36)
   ->  Sort  (cost=88.17..91.35 rows=1270 width=36)
         Sort Key: b.id
         ->  Seq Scan on testtab6 b  (cost=0.00..22.70 rows=1270 width=36)
(8 rows)

testdb2=# explain select a.id,a.name,b.age from t_user a join student b on a.id = b.no;
                               QUERY PLAN
------------------------------------------------------------------------
 Hash Join  (cost=22.82..129.36 rows=2451 width=86)
   Hash Cond: (b.no = a.id)
   ->  Seq Scan on student b  (cost=0.00..18.60 rows=860 width=8)
   ->  Hash  (cost=15.70..15.70 rows=570 width=82)
         ->  Seq Scan on t_user a  (cost=0.00..15.70 rows=570 width=82)
(5 rows)

testdb2=# explain analyze select a.id,b.age from t_user a join student b on a.id = b.no;
                                                    QUERY PLAN
------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=22.82..129.36 rows=2451 width=86) (actual time=0.068..0.071 rows=4 loops=1)
   Hash Cond: (b.no = a.id)
   ->  Seq Scan on student b  (cost=0.00..18.60 rows=860 width=8) (actual time=0.042..0.043 rows=6 loops=1)
   ->  Hash  (cost=15.70..15.70 rows=570 width=82) (actual time=0.010..0.010 rows=3 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 9kB
         ->  Seq Scan on t_user
             a  (cost=0.00..15.70 rows=570 width=82) (actual time=0.005..0.006 rows=3 loops=1)
 Planning time: 0.130 ms
 Execution time: 0.234 ms
(8 rows)

testdb2=# explain (analyze true,buffers true) select a.id,b.age from t_user a join student b on a.id = b.no;
                                                    QUERY PLAN
------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=22.82..129.36 rows=2451 width=86) (actual time=0.019..0.020 rows=4 loops=1)
   Hash Cond: (b.no = a.id)
   Buffers: shared hit=2
   ->  Seq Scan on student b  (cost=0.00..18.60 rows=860 width=8) (actual time=0.006..0.006 rows=6 loops=1)
         Buffers: shared hit=1
   ->  Hash  (cost=15.70..15.70 rows=570 width=82) (actual time=0.006..0.006 rows=3 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 9kB
         Buffers: shared hit=1
         ->  Seq Scan on t_user a  (cost=0.00..15.70 rows=570 width=82) (actual time=0.003..0.004 rows=3 loops=1)
               Buffers: shared hit=1
 Planning time: 0.064 ms
 Execution time: 0.038 ms
(12 rows)

shared hit: blocks read directly from shared memory (the shared buffer cache).

Full-table scan:

testdb2=# explain select * from testtab05;
                          QUERY PLAN
--------------------------------------------------------------
 Seq Scan on testtab05  (cost=0.00..22.70 rows=1270 width=36)
(1 row)

Index scan:

testdb2=# explain select * from jtest01 where id = 11 ;
                         QUERY PLAN
-----------------------------------------------------------
 Seq Scan on jtest01  (cost=0.00..2584.00 rows=1 width=80)
   Filter: (id = 11)
(2 rows)

testdb2=# select count(*) from jtest01;
 count
--------
 100000
(1 row)

testdb2=# create index idx_jtest01_id on jtest01(id);
CREATE INDEX
testdb2=# explain select * from jtest01 where id = 11 ;
                                  QUERY PLAN
-------------------------------------------------------------------------------
 Index Scan using idx_jtest01_id on jtest01  (cost=0.29..8.31 rows=1 width=80)
   Index Cond: (id = 11)
(2 rows)

Bitmap scan:

A bitmap of the matching rows or blocks is built in memory; once the index scan finishes, the bitmap is used to fetch the corresponding rows from the table's data files. If two indexes are used, the bitmaps produced by each index can be combined with an AND or OR operation into a single bitmap before the table data is read. This kind of scan is chosen when the plan is expected to return many rows, e.g. for non-equality predicates, IN clauses, or when several conditions can each use a different index.

The following queries, however, did not use a bitmap scan:
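As an aside before those examples, the AND/OR merging of two index bitmaps can be sketched with plain Python sets. This is only a toy model of the idea, not PostgreSQL's actual TID-bitmap implementation, and the table contents and predicates are invented:

```python
# Toy model of a bitmap scan: each "index scan" yields a bitmap (here, a set)
# of matching row positions; bitmaps from two indexes are merged with AND/OR
# before any table rows are fetched. Data below is made up for illustration.

# Pretend heap: row position -> (a, b)
heap = {pos: (pos % 7, pos % 5) for pos in range(100)}

# "Index scans" produce one bitmap per predicate.
bitmap_a = {pos for pos, (a, _) in heap.items() if a == 3}   # WHERE a = 3
bitmap_b = {pos for pos, (_, b) in heap.items() if b == 2}   # WHERE b = 2

# BitmapAnd / BitmapOr: merge the bitmaps before touching the heap.
bitmap_and = bitmap_a & bitmap_b     # WHERE a = 3 AND b = 2
bitmap_or  = bitmap_a | bitmap_b     # WHERE a = 3 OR  b = 2

# Finally fetch the matching rows, in position order -- which is what makes
# the subsequent heap access mostly sequential.
rows = [heap[pos] for pos in sorted(bitmap_and)]
print(len(bitmap_a), len(bitmap_b), len(bitmap_and), len(bitmap_or))
# prints: 14 20 3 31
```

Merging first means each heap page is visited at most once, no matter how many indexes contributed to the bitmap.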
testdb2=# explain select * from jtest01 where id > 50000;
                                      QUERY PLAN
--------------------------------------------------------------------------------------
 Index Scan using idx_jtest01_id on jtest01  (cost=0.29..2091.93 rows=49808 width=80)
   Index Cond: (id > 50000)
(2 rows)

testdb2=# explain select * from jtest01 where id > 5000;
                          QUERY PLAN
---------------------------------------------------------------
 Seq Scan on jtest01  (cost=0.00..2584.00 rows=94967 width=80)
   Filter: (id > 5000)
(2 rows)

testdb2=# select count(*) from jtest01;
 count
--------
 100000
(1 row)

testdb2=# explain select count(*) from jtest01;
                             QUERY PLAN
---------------------------------------------------------------------
 Aggregate  (cost=2584.00..2584.01 rows=1 width=8)
   ->  Seq Scan on jtest01  (cost=0.00..2334.00 rows=100000 width=0)
(2 rows)

testdb2=# explain select count(*) from jtest01 where id >4000;
                             QUERY PLAN
--------------------------------------------------------------------
 Aggregate  (cost=2823.83..2823.84 rows=1 width=8)
   ->  Seq Scan on jtest01  (cost=0.00..2584.00 rows=95933 width=0)
         Filter: (id > 4000)
(3 rows)

Condition filtering:

Condition filtering appears in the execution plan as "Filter". If there is an index on the column, the planner uses the index instead of filtering.

testdb2=# explain select * from jtest01 where id in (select id from jtest02) and jdoc->>'name' like 'a%';
                                QUERY PLAN
---------------------------------------------------------------------------
 Hash Semi Join  (cost=4180.00..7425.88 rows=500 width=80)
   Hash Cond: (jtest01.id = jtest02.id)
   ->  Seq Scan on jtest01  (cost=0.00..2834.00 rows=500 width=80)
         Filter: ((jdoc ->> 'name'::text) ~~ 'a%'::text)
   ->  Hash  (cost=2539.00..2539.00 rows=100000 width=4)
         ->  Seq Scan on jtest02  (cost=0.00..2539.00 rows=100000 width=4)
(6 rows)

Nestloop Join (nested-loop join):

The smaller table is used as the driving (outer) table and the larger table as the driven (inner) table; the larger table should have an index on the join column. The driving table is generally under 10,000 rows.

Hash Join:

The optimizer takes the smaller of the two tables, builds an in-memory hash table keyed on the join column, then scans the larger table and probes the hash table to find the matching rows. This works well when the smaller table fits entirely in memory, in which case the total cost is just the cost of accessing the two tables. If the table is too large to fit in memory, the optimizer splits it into partitions and writes the partitions that do not fit into temporary segments on disk; a large temporary segment is then needed for good I/O performance.

testdb2=# explain analyze select a.id,b.age from t_user a join student b on
a.id = b.no;
                                                    QUERY PLAN
------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=22.82..129.36 rows=2451 width=86) (actual time=0.068..0.071 rows=4 loops=1)
   Hash Cond: (b.no = a.id)
   ->  Seq Scan on student b  (cost=0.00..18.60 rows=860 width=8) (actual time=0.042..0.043 rows=6 loops=1)
   ->  Hash  (cost=15.70..15.70 rows=570 width=82) (actual time=0.010..0.010 rows=3 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 9kB
         ->  Seq Scan on t_user a  (cost=0.00..15.70 rows=570 width=82) (actual time=0.005..0.006 rows=3 loops=1)
 Planning time: 0.130 ms
 Execution time: 0.234 ms
(8 rows)

Here the hash table is built on the smaller table, t_user; the larger table, student, is then scanned and the hash table probed for matching rows.

Merge Join:

A hash join usually performs better than a merge join, but if the source data is indexed, or the inputs are already sorted, the sort-merge join needs no extra sort step, and in that case the merge join can outperform the hash join.

testdb2=# explain select a.id,b.col1 from testtab05 a join testtab6 b on a.id = b.id ;
                                 QUERY PLAN
----------------------------------------------------------------------------
 Merge Join  (cost=176.34..303.67 rows=8064 width=72)
   Merge Cond: (a.id = b.id)
   ->  Sort  (cost=88.17..91.35 rows=1270 width=36)
         Sort Key: a.id
         ->  Seq Scan on testtab05 a  (cost=0.00..22.70 rows=1270 width=36)
   ->  Sort  (cost=88.17..91.35 rows=1270 width=36)
         Sort Key: b.id
         ->  Seq Scan on testtab6 b  (cost=0.00..22.70 rows=1270 width=36)
(8 rows)

There is no index on id here; "Sort Key: a.id" and "Sort Key: b.id" show that each table is being sorted on its id column. After the indexes are created, the planner chooses a hash join instead:

testdb2=# create index idx_id_testtab6 on testtab6(id);
CREATE INDEX
testdb2=# create index idx_id_testtab05 on testtab05(id);
CREATE INDEX
testdb2=# explain select a.id,b.col1 from testtab05 a join testtab6 b on a.id = b.id
testdb2-# ;
                               QUERY PLAN
------------------------------------------------------------------------
 Hash Join  (cost=1.07..2.18 rows=3 width=72)
   Hash Cond: (b.id = a.id)
   ->  Seq Scan on testtab6 b  (cost=0.00..1.06 rows=6 width=36)
   ->  Hash  (cost=1.03..1.03 rows=3 width=36)
         ->  Seq Scan on testtab05 a  (cost=0.00..1.03 rows=3 width=36)
(5 rows)
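The build-then-probe strategy of the hash join above can be sketched in a few lines of Python. This is a minimal in-memory sketch assuming the build side fits in memory (no partitioning/batching); the sample rows are invented, merely shaped like t_user and student:

```python
# Sketch of an in-memory hash join: build a hash table on the smaller
# relation, then scan the larger relation and probe for matches.

def hash_join(build_rows, probe_rows, build_key, probe_key):
    # Build phase: hash the smaller relation on its join key.
    table = {}
    for row in build_rows:
        table.setdefault(row[build_key], []).append(row)
    # Probe phase: scan the larger relation and look up matches.
    out = []
    for row in probe_rows:
        for match in table.get(row[probe_key], []):
            out.append({**match, **row})
    return out

# Invented sample data (t_user small, student larger).
t_user  = [{"id": 1, "name": "ann"}, {"id": 2, "name": "bob"}]
student = [{"no": 1, "age": 20}, {"no": 2, "age": 21},
           {"no": 2, "age": 22}, {"no": 9, "age": 30}]

joined = hash_join(t_user, student, "id", "no")  # t_user is the build side
print([(r["name"], r["age"]) for r in joined])
# prints: [('ann', 20), ('bob', 21), ('bob', 22)]
```

Each probe-side row costs one hash lookup, which is why the total cost is roughly the cost of scanning both inputs once, as noted above.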
Under normal circumstances PostgreSQL does not pick a bad execution plan. When it does, the cause is usually statistics that were not collected in time, which can be fixed by running ANALYZE more frequently; using the "enable_*" planner parameters is only a temporary workaround.

Collecting statistics:

Row counts, block counts, and other statistics for tables and indexes are recorded in the system catalog pg_class; most other statistics are collected in the system catalog pg_statistic.

1. Configuration options for the statistics collector
2. Statistics output for SQL execution
3. Collecting statistics manually

Statistics are collected manually with the ANALYZE command, which gathers a table's statistics and stores the results in pg_statistic. In PostgreSQL the autovacuum daemon is enabled by default; it analyzes tables automatically and collects statistics. When autovacuum is turned off, ANALYZE must be run periodically, or after a large part of a table's contents has changed.

A common strategy is to run VACUUM and ANALYZE once a day while the database is relatively idle:

testdb2=# VACUUM;
VACUUM
testdb2=# ANALYZE;
ANALYZE

ANALYZE command syntax:

ANALYZE [VERBOSE] [table [(column [,...])]]

VERBOSE: show progress and some statistics about the table.
table: the table to analyze; if omitted, all tables are analyzed.
column: specific columns to analyze; by default, all columns.

For example: ANALYZE test(id,col1);
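The statistics that ANALYZE collects are what the planner's row estimates are built from: roughly, estimated rows = reltuples × selectivity of the predicate. As a simplified sketch of that idea, assuming id is uniformly distributed over 1..100000 (the real planner uses the histogram and most-common-value lists in pg_statistic, so it will not match exactly):

```python
# Rough sketch of how collected statistics drive a row estimate:
#   estimated_rows = reltuples * selectivity(predicate)
# Assumes id is uniform over [lo, hi]; the real planner consults the
# histogram and MCV lists stored in pg_statistic instead.

reltuples = 100000          # table row count, as kept in pg_class after ANALYZE
lo, hi = 1, 100000          # min/max of id, from the column statistics

def est_rows_gt(value):
    selectivity = (hi - value) / (hi - lo)   # fraction of ids > value
    return round(reltuples * selectivity)

print(est_rows_gt(4000))    # close to the planner's rows=95933 seen earlier
print(est_rows_gt(50000))   # close to the planner's rows=49808 seen earlier
```

Stale statistics shift these estimates, which is exactly how an out-of-date ANALYZE leads the planner to a bad join order or scan type.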