A coworker of mine found himself in a spot where the index used by his database query was working as intended, but performance was still too slow.
What are you supposed to do in this situation?
The easiest thing to do is to play around with the query/indices and try out some query rewriting tricks.
Start by looking at the query plan: in PostgreSQL or MySQL you do this by placing EXPLAIN before your query (in PostgreSQL, EXPLAIN ANALYZE also executes the query and reports actual timings).
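A minimal sketch in PostgreSQL, using a hypothetical `orders` table:

```sql
-- EXPLAIN shows the planner's chosen plan without running the query;
-- EXPLAIN ANALYZE runs it and shows actual row counts and timings
EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE customer_id = 42;
```

Comparing the estimated rows against the actual rows in the output is often the fastest way to spot stale statistics or a misused index.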
Does your query contain an inner join where one table is much smaller than the other? Try using a left join and filtering in the WHERE clause instead.
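Sketched with hypothetical `orders` and `small_statuses` tables (the two forms return the same rows here, but the planner may treat them differently):

```sql
-- Original: inner join against a much smaller lookup table
SELECT o.*
FROM orders o
JOIN small_statuses s ON s.id = o.status_id;

-- Rewritten: left join, with non-matches filtered out in WHERE
SELECT o.*
FROM orders o
LEFT JOIN small_statuses s ON s.id = o.status_id
WHERE s.id IS NOT NULL;
```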
Be very careful with sorting: if your query involves an ORDER BY, you can make an index match the sort order by using ASC/DESC flags on the indexed columns, letting the database skip the sort entirely.
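For example, in PostgreSQL (hypothetical `orders` table again):

```sql
-- Query sorts newest-first within one customer
SELECT *
FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC;

-- Matching index: equality column first, then the sort column
-- declared with the same direction as the ORDER BY
CREATE INDEX orders_customer_created_desc
    ON orders (customer_id, created_at DESC);
```

With a matching index the plan can read rows already in order instead of producing a separate Sort node.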
If possible, try experimenting with moving WHERE-clause predicates into the JOIN clauses instead. This can dramatically change the query plan.
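A sketch of the rewrite, with hypothetical `orders` and `customers` tables; for an inner join the two forms are logically equivalent, but the planner may pick a different join order or strategy:

```sql
-- Predicate applied in the WHERE clause
SELECT o.*, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE c.region = 'EU';

-- Same predicate moved into the join condition
SELECT o.*, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
               AND c.region = 'EU';
```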
Make sure the statistics on your tables are kept up to date; in PostgreSQL, autovacuum (which also triggers ANALYZE) should run at a suitable interval for your workload.
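In PostgreSQL you can refresh statistics manually and check when autovacuum last ran (table name hypothetical):

```sql
-- Refresh planner statistics for one table right now
ANALYZE orders;

-- See when vacuum/analyze last ran, automatically or manually
SELECT relname, last_autovacuum, last_autoanalyze, last_analyze
FROM pg_stat_user_tables
WHERE relname = 'orders';
```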
Look into what your work_mem is set to; when using EXPLAIN (ANALYZE, BUFFERS), check how much buffer space the query is using and whether sorts or hashes are spilling to disk.
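A quick way to test this in a PostgreSQL session (hypothetical query):

```sql
-- Current setting
SHOW work_mem;

-- Raise it for this session only, then re-check the plan:
-- an in-memory sort reports "Sort Method: quicksort",
-- a spill to disk reports "Sort Method: external merge"
SET work_mem = '64MB';

EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM orders
ORDER BY created_at;
```

Session-level SET lets you experiment without changing the server-wide configuration.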
If the query involves an aggregate and there are just too many rows, you'll have to get creative: look into materialized views plus calculating diffs based on what changed inside the aggregate, leverage caching outside of the DB, or create precomputed subtables for fast aggregates.
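A minimal materialized-view sketch in PostgreSQL, assuming a hypothetical `orders` table with `customer_id`, `created_at`, and `amount` columns:

```sql
-- Precompute the aggregate once
CREATE MATERIALIZED VIEW daily_order_totals AS
SELECT customer_id,
       date_trunc('day', created_at) AS day,
       sum(amount) AS total
FROM orders
GROUP BY customer_id, date_trunc('day', created_at);

-- REFRESH ... CONCURRENTLY requires a unique index on the view
CREATE UNIQUE INDEX daily_order_totals_pk
    ON daily_order_totals (customer_id, day);

-- Reads now hit the small precomputed table instead of scanning orders
SELECT total
FROM daily_order_totals
WHERE customer_id = 42
  AND day = date '2024-01-01';

-- Re-run on a schedule; CONCURRENTLY avoids blocking readers
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_order_totals;
```

The trade-off is freshness: the view is only as current as its last refresh, which is why the diff-based or externally cached approaches above can be worth the extra complexity.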