How can the query below be optimized? I added indexes on open_time and close_time, but it is still slow — about 500 ms per query.
select count(1) from switchcover where open_time <= '2024-01-07 23:59:59' and close_time >= '2024-01-07 00:00:00';
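For reference, a sketch of the DDL behind "added indexes on open_time and close_time": the EXPLAIN output later in the thread shows a composite index named bz_switchcover_open_close_time_c_idx, so this is an assumption about its shape, and the column order is a guess.

```sql
-- Assumed DDL: a composite b-tree covering both time columns.
-- Index name taken from the EXPLAIN output below; column order is a guess.
CREATE INDEX bz_switchcover_open_close_time_c_idx
    ON switchcover (open_time, close_time);
```

Note that a b-tree on (open_time, close_time) can only bound the scan by the leading open_time predicate; the close_time condition is re-checked entry by entry inside the index, which is why the scan still touches many entries.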
1
vanpeisi7 OP: Plain limit/offset pagination on its own is fast, a few tens of ms; it's the count that is expensive.
select * from switch.bz_switchcover where close_time >= '2023-05-01 00:00:00' and open_time <= '2023-05-01 23:59:59' order by open_time desc limit 15 offset 0;
2
whoami9426 (297 days ago)
Post the explain output.
3
vanpeisi7 OP: @whoami9426
```
Finalize Aggregate  (cost=362425.35..362425.36 rows=1 width=8)
  ->  Gather  (cost=362425.14..362425.35 rows=2 width=8)
        Workers Planned: 2
        ->  Partial Aggregate  (cost=361425.14..361425.15 rows=1 width=8)
              ->  Parallel Index Only Scan using bz_switchcover_open_close_time_c_idx on bz_switchcover  (cost=0.43..358436.32 rows=1195526 width=0)
                    Index Cond: ((open_time <= '2023-05-11 23:59:59'::timestamp without time zone) AND (close_time >= '2023-05-11 00:00:00'::timestamp without time zone))
JIT:
  Functions: 5
  Options: Inlining false, Optimization false, Expressions true, Deforming true
```
4
vanpeisi7 OP: The detailed explain:
```
Finalize Aggregate  (cost=425545.45..425545.46 rows=1 width=8) (actual time=522.040..573.976 rows=1 loops=1)
  ->  Gather  (cost=425545.23..425545.44 rows=2 width=8) (actual time=521.286..573.949 rows=3 loops=1)
        Workers Planned: 2
        Workers Launched: 2
        ->  Partial Aggregate  (cost=424545.23..424545.24 rows=1 width=8) (actual time=494.321..494.322 rows=1 loops=3)
              ->  Parallel Index Only Scan using bz_switchcover_open_close_time_c_idx on bz_switchcover  (cost=0.43..421581.16 rows=1185629 width=0) (actual time=20.590..487.712 rows=9267 loops=3)
                    Index Cond: ((open_time <= '2023-07-16 23:59:59'::timestamp without time zone) AND (close_time >= '2023-07-16 00:00:00'::timestamp without time zone))
                    Heap Fetches: 18
Planning Time: 0.184 ms
JIT:
  Functions: 11
  Options: Inlining false, Optimization false, Expressions true, Deforming true
  Timing: Generation 1.365 ms, Inlining 0.000 ms, Optimization 1.067 ms, Emission 17.333 ms, Total 19.766 ms
Execution Time: 574.577 ms
```
5
Huelse (297 days ago)
You should use a partial index: `create index … where close_time >= '2024-01-07 00:00:00'::timestamp;`
6
whoami9426 (297 days ago)
Have a look at [Understanding the behavior of PostgreSQL's count function](https://zhuanlan.zhihu.com/p/63379010).
If the table itself holds a lot of rows, a slow count is hard to avoid.
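When an exact figure is not required, a common workaround is to use the planner's statistics instead of running a real count at all. A minimal sketch (table name taken from the thread; this gives a whole-table estimate only, not one filtered by the time predicates):

```sql
-- Planner statistics: fast but approximate, and not filtered by any WHERE clause.
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'bz_switchcover';
```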
7
opengps (297 days ago)
If there is no duration column, the best option is a composite index on these two time columns.
If there is a duration column, working with the start time plus the duration is much easier than comparing the two time columns directly; that combination needs a composite index to back it as well.
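The duration-column idea above could be sketched like this, assuming PostgreSQL 12+ so a stored generated column keeps it in sync automatically (the column and index names here are made up):

```sql
-- Hypothetical duration column derived from the two timestamps.
ALTER TABLE bz_switchcover
    ADD COLUMN duration interval
    GENERATED ALWAYS AS (close_time - open_time) STORED;

-- Composite index so queries can filter on start time plus duration.
CREATE INDEX bz_switchcover_open_duration_idx
    ON bz_switchcover (open_time, duration);
```

This only pays off if queries can put a bound on the duration, so that the open_time range becomes narrow.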
8
MoYi123 (297 days ago)
What is this SQL supposed to mean? The logic looks odd; normally open_time should be less than close_time.
In any case, find a way to turn one of the two time predicates into a BETWEEN over a fairly small window; that should help performance a lot. Right now this intersects two large sets, which is bound to be slow.
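The rewrite suggested above might look like this, assuming the business logic guarantees a cover never stays open longer than some cap, say 7 days (the cap is an assumption, not something stated in the thread):

```sql
-- Bound open_time on both sides so the b-tree scan range shrinks.
-- The 7-day maximum duration is an assumed business invariant.
SELECT count(*)
FROM bz_switchcover
WHERE open_time BETWEEN timestamp '2024-01-07 00:00:00' - interval '7 days'
                    AND timestamp '2024-01-07 23:59:59'
  AND close_time >= timestamp '2024-01-07 00:00:00';
```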
9
siweipancc (296 days ago, via iPhone)
A suggestion: use count(id).
11
Belmode (296 days ago)
select count(1) from (select id, close_time from switchcover where open_time <= '2024-01-07 23:59:59') t where t.close_time >= '2024-01-07 00:00:00';
How about trying it this way?