Compiling PostgreSQL 11 with JIT Support

JIT stands for just-in-time compilation.

JIT can noticeably speed up queries over large data sets, but it does not help in every situation; see https://www.postgresql.org/docs/11/jit-decision.html for how PostgreSQL decides when to apply it.
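That decision is driven by the cost thresholds shown further down (jit_above_cost, jit_inline_above_cost, jit_optimize_above_cost). A quick way to see it in action, sketched here against a placeholder table t, is to lower the threshold so even a cheap plan is JIT-compiled and compare the EXPLAIN output:

set jit = on;
set jit_above_cost = 0;          -- force JIT even for cheap plans (demonstration only)
explain select count(*) from t;  -- the plan now ends with a "JIT:" section
reset jit_above_cost;            -- back to the default of 100000
explain select count(*) from t;  -- the "JIT:" section disappears for cheap plans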

Below I use a PG 11 build compiled with JIT enabled to demonstrate the performance gain.

Note: JIT support must be enabled at build time. The PostgreSQL documentation states that LLVM 3.9 is the minimum supported version.

wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

 

yum localinstall epel-release-latest-7.noarch.rpm

 

yum install llvm5.0 llvm5.0-devel clang

cd /root/pg_sources/postgresql-11    # change to the PostgreSQL 11 source directory to run the build


./configure --prefix=/usr/local/pgsql-11 \
--with-python --with-perl --with-tcl --with-pam \
--with-openssl --with-libxml --with-libxslt \
--with-llvm LLVM_CONFIG='/usr/lib64/llvm5.0/bin/llvm-config'

# If configure fails because of missing dependencies, install them and rerun configure, then build and install with make && make install.
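Once the server is running from the new binaries, a quick sanity check from psql (a sketch; pg_jit_available() is one of the system information functions shipped with PG 11's JIT support) confirms that the llvmjit provider was built and can be loaded:

-- Should report 'llvmjit' on a --with-llvm build
select name, setting from pg_settings where name = 'jit_provider';
-- Returns true when the JIT provider library can actually be loaded
select pg_jit_available();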

 

Edit the configuration file to enable JIT. After restarting PostgreSQL, the JIT-related settings look like this:

postgres=# select name,setting from pg_settings where name like 'jit%';
          name           | setting
-------------------------+---------
 jit                     | on
 jit_above_cost          | 100000
 jit_debugging_support   | off
 jit_dump_bitcode        | off
 jit_expressions         | on
 jit_inline_above_cost   | 500000
 jit_optimize_above_cost | 500000
 jit_profiling_support   | off
 jit_provider            | llvmjit
 jit_tuple_deforming     | on
(10 rows)
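For reference, jit is an ordinary GUC: besides editing postgresql.conf by hand, it can be enabled cluster-wide with ALTER SYSTEM or per session with SET, and it does not require a restart. A minimal sketch:

alter system set jit = on;   -- written to postgresql.auto.conf
select pg_reload_conf();     -- reload so new sessions pick it up
set jit = on;                -- or enable it only for the current session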

 

Digoal (德哥) gives a similar test case here: https://github.com/digoal/blog/blob/master/201910/20191017_01.md

 

Below are the results of my own tests (CentOS 7 + PG 11 on an ordinary SATA disk; the only tuning was shared_buffers = 8GB, no other parameter changes).
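For reference, that one non-default setting can be applied like this (a sketch; shared_buffers is a postmaster-level parameter, so it only takes effect after a full server restart):

-- ALTER SYSTEM writes the value to postgresql.auto.conf;
-- restart the server afterwards, e.g. with pg_ctl restart
alter system set shared_buffers = '8GB';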


Generate some test data:

create table a(id int, info text, crt_time timestamp, c1 int);

insert into a select generate_series(1,100000000),'test',now(),random()*100;   -- no indexes; let PostgreSQL brute-force the scans

analyze a;

 

\dt+ a
                    List of relations
 Schema | Name | Type  |  Owner   |  Size   | Description
--------+------+-------+----------+---------+-------------
 public | a    | table | postgres | 5746 MB |
(1 row)
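Optionally, a quick check (a sketch) that ANALYZE left the planner with accurate statistics for the 100 million rows:

-- reltuples is the planner's row-count estimate, refreshed by ANALYZE
select relname, reltuples::bigint as estimated_rows,
       pg_size_pretty(pg_relation_size('a')) as table_size
from pg_class where relname = 'a';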

 

 

 

Results on the JIT-enabled PG 11 build. The session settings below turn JIT on and zero out the parallel cost parameters so the planner chooses a fully parallel plan:

set jit=on;
set max_parallel_workers_per_gather =32;
alter table a set (parallel_workers =32);
set min_parallel_table_scan_size =0;
set min_parallel_index_scan_size =0;
set parallel_setup_cost =0;
set parallel_tuple_cost =0;

 

postgres=# select t1.c1,count(*) from a t1 join a t2 using (id) group by t1.c1; 

Time: 31402.562 ms (00:31.403)

 

postgres=# explain select t1.c1,count(*) from a t1 join a t2 using (id) group by t1.c1;
                                                 QUERY PLAN
------------------------------------------------------------------------------------------------------------
 Finalize GroupAggregate  (cost=1657122.68..1657229.70 rows=101 width=12)
   Group Key: t1.c1
   ->  Gather Merge  (cost=1657122.68..1657212.53 rows=3232 width=12)
         Workers Planned: 32
         ->  Sort  (cost=1657121.85..1657122.10 rows=101 width=12)
               Sort Key: t1.c1
               ->  Partial HashAggregate  (cost=1657117.48..1657118.49 rows=101 width=12)
                     Group Key: t1.c1
                     ->  Parallel Hash Join  (cost=817815.59..1641492.46 rows=3125004 width=4)
                           Hash Cond: (t1.id = t2.id)
                           ->  Parallel Seq Scan on a t1  (cost=0.00..766545.04 rows=3125004 width=8)
                           ->  Parallel Hash  (cost=766545.04..766545.04 rows=3125004 width=4)
                                 ->  Parallel Seq Scan on a t2  (cost=0.00..766545.04 rows=3125004 width=4)
 JIT:
   Functions: 23
   Options: Inlining true, Optimization true, Expressions true, Deforming true
(16 rows)
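Plain EXPLAIN only shows that JIT will be used (the Functions and Options lines above). Running the same statement with EXPLAIN (ANALYZE) additionally reports how much time was spent on JIT itself (code generation, inlining, optimization, emission), which helps confirm that the compilation overhead is small relative to the 31-second runtime:

explain (analyze) select t1.c1,count(*) from a t1 join a t2 using (id) group by t1.c1;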

 

 

postgres=# select t1.c1,count(*) from a t1 join a t2 on (t1.id=t2.id and t1.c1=2 and t2.c1=2) group by t1.c1;
 c1 |  count
----+---------
  2 | 1000506
(1 row)

Time: 4780.824 ms (00:04.781)

 

postgres=# select * from a order by c1,id desc limit 10;
    id    | info |          crt_time          | c1
----------+------+----------------------------+----
 99999958 | test | 2019-10-18 09:22:32.391061 |  0
 99999926 | test | 2019-10-18 09:22:32.391061 |  0
 99999901 | test | 2019-10-18 09:22:32.391061 |  0
 99999802 | test | 2019-10-18 09:22:32.391061 |  0
 99999165 | test | 2019-10-18 09:22:32.391061 |  0
 99999100 | test | 2019-10-18 09:22:32.391061 |  0
 99998968 | test | 2019-10-18 09:22:32.391061 |  0
 99998779 | test | 2019-10-18 09:22:32.391061 |  0
 99998652 | test | 2019-10-18 09:22:32.391061 |  0
 99998441 | test | 2019-10-18 09:22:32.391061 |  0
(10 rows)

Time: 3317.480 ms (00:03.317)

 

postgres=# select c1,count(*) from a group by c1; 

Time: 5031.796 ms (00:05.032)

 

 

Results on the PG 11 build compiled without JIT support

postgres=#  select t1.c1,count(*) from a t1 join a t2 using (id) group by t1.c1; 

Time: 71410.034 ms (01:11.410)

 

postgres=# explain  select t1.c1,count(*) from a t1 join a t2 using (id) group by t1.c1;
                                                  QUERY PLAN
--------------------------------------------------------------------------------------------------------------
 Finalize GroupAggregate  (cost=6150282.43..6150308.02 rows=101 width=12)
   Group Key: t1.c1
   ->  Gather Merge  (cost=6150282.43..6150306.00 rows=202 width=12)
         Workers Planned: 2
         ->  Sort  (cost=6149282.41..6149282.66 rows=101 width=12)
               Sort Key: t1.c1
               ->  Partial HashAggregate  (cost=6149278.03..6149279.04 rows=101 width=12)
                     Group Key: t1.c1
                     ->  Parallel Hash Join  (cost=1835524.52..5940950.58 rows=41665490 width=4)
                           Hash Cond: (t1.id = t2.id)
                           ->  Parallel Seq Scan on a t1  (cost=0.00..1151949.90 rows=41665490 width=8)
                           ->  Parallel Hash  (cost=1151949.90..1151949.90 rows=41665490 width=4)
                                 ->  Parallel Seq Scan on a t2  (cost=0.00..1151949.90 rows=41665490 width=4)
(13 rows)

Time: 0.636 ms

 

 

postgres=# select t1.c1,count(*) from a t1 join a t2 on (t1.id=t2.id and t1.c1=2 and t2.c1=2) group by t1.c1;
 c1 |  count
----+---------
  2 | 1001209
(1 row)

Time: 9329.623 ms (00:09.330)

 

postgres=# select * from a order by c1,id desc limit 10;
    id    | info |          crt_time          | c1
----------+------+----------------------------+----
 99999518 | test | 2019-10-18 09:18:36.532469 |  0
 99999088 | test | 2019-10-18 09:18:36.532469 |  0
 99999016 | test | 2019-10-18 09:18:36.532469 |  0
 99998987 | test | 2019-10-18 09:18:36.532469 |  0
 99998899 | test | 2019-10-18 09:18:36.532469 |  0
 99998507 | test | 2019-10-18 09:18:36.532469 |  0
 99998142 | test | 2019-10-18 09:18:36.532469 |  0
 99998107 | test | 2019-10-18 09:18:36.532469 |  0
 99998050 | test | 2019-10-18 09:18:36.532469 |  0
 99997437 | test | 2019-10-18 09:18:36.532469 |  0
(10 rows)

Time: 6113.971 ms (00:06.114)

 

 

postgres=# select c1,count(*) from a group by c1; 

Time: 9868.117 ms (00:09.868)

 

From these results, complex queries over the large data set (the big join, the grouped aggregates, the sort with LIMIT) ran in roughly half the time or better with JIT enabled. (For the first join, note that the JIT-enabled run also planned 32 parallel workers versus 2 on the non-JIT instance, so not all of that particular gap comes from JIT alone.)

For everyday mixed OLTP + OLAP workloads, a single PG 11 instance can cover both.

 
