http://www.perlmonks.org?node_id=1056455


in reply to Re: CGI Program To Delete Old Data From MySQL Table?
in thread CGI Program To Delete Old Data From MySQL Table?

Unless PostgreSQL is extremely clever, this query effectively prevents the use of an index on the date column, because the column ends up inside an expression instead of being compared directly to a precomputed cutoff. You should move the date arithmetic to the other side of the comparison, something like (MSSQL syntax, untested)

delete from table where column < DateAdd(day,-60,getdate());
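Since the OP's table is MySQL, the equivalent there should be something along these lines (also untested; table and column names are placeholders):

delete from tbl where d < now() - interval 60 day;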

Jenda
Enoch was right!
Enjoy the last years of Rome.

Re^3: CGI Program To Delete Old Data From MySQL Table?
by erix (Prior) on Oct 01, 2013 at 11:19 UTC

    I did think of including that optimization, but I thought it was better (because conceptually easier) to show it the way I did: the idea was to demonstrate the ease of use of the postgres interval datatype.

    And remember that index retrieval is not always faster: a seq scan beats an index scan when the query hits a large part of the table (like deleting 40 out of 100 rows, or keeping 60 days out of the 'hundreds' of rows the OP mentions). Of course there is no way to know what the distribution in the OP's table is.

    Indeed, for my example it turns out a seq scan is still preferred over the index with my original date data. Only when the number of deleted rows becomes small compared to the total rowcount does Pg use the index. So yes, PostgreSQL /is/ extremely clever ;-)

    The index-usable statement for postgres could be:

    delete from t where d < now() - interval '60 days';
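    For completeness, the index this can use would be something like the following (a sketch; the index name is taken from the plan output below, the setup script itself isn't shown):

    create index d_date_idx on t (d);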

    I tweaked my little program to accept two arguments: the number of rows to create (arg1) and the number of rows to keep (arg2).

    $ pm/1056374.sh 99 60   # create table with 99 rows, keep 60
    DROP TABLE
    SELECT 100
    CREATE INDEX
    ANALYZE
     count | records_to_keep | records_to_dump
    -------+-----------------+-----------------
       100 |              60 |              40
    (1 row)

                          QUERY PLAN
    --------------------------------------------------------
     Delete on t  (cost=0.00..2.75 rows=39 width=6)
       ->  Seq Scan on t  (cost=0.00..2.75 rows=39 width=6)
             Filter: (d <= (now() - '60 days'::interval))
    (3 rows)

    DELETE 40
     count | records_to_keep | records_to_dump
    -------+-----------------+-----------------
        60 |              60 |               0
    (1 row)

    $ pm/1056374.sh 999 990   # create table with 999 rows, keep 990
    DROP TABLE
    SELECT 1000
    CREATE INDEX
    ANALYZE
     count | records_to_keep | records_to_dump
    -------+-----------------+-----------------
      1000 |             990 |              10
    (1 row)

                                   QUERY PLAN
    ---------------------------------------------------------------------------
     Delete on t  (cost=0.28..8.45 rows=10 width=6)
       ->  Index Scan using d_date_idx on t  (cost=0.28..8.45 rows=10 width=6)
             Index Cond: (d <= (now() - '990 days'::interval))
    (3 rows)

    DELETE 10
     count | records_to_keep | records_to_dump
    -------+-----------------+-----------------
       990 |             990 |               0
    (1 row)

      First, it depends on whether the index is clustered. Second, for a script that's apparently supposed to be run regularly and delete the old records, it's safe to assume that the number of records to delete will be small compared to the total number of records: 1/61 if the script is run daily, 7/67 if weekly.

      Jenda
      Enoch was right!
      Enjoy the last years of Rome.

        We're descending a bit into database arcana (and not even the OP's database's), but what the hell:

        Small tables don't use indexes, and this 60-row table is a small table. I can't get Postgres to use an index for either 1/61 or 7/67 rows (I even tried broadening the rows with more columns, but no). Index use only starts at a few hundred rows, for my example. A quick google search makes me think Oracle behaves the same. MySQL I'm not sufficiently interested in to go look up; it's probably the same.

        I don't see what clustered indexes have to do with it. Clustering reorganizes the physical table data according to the index ordering. Obviously that can make things faster, but I rather suspect that the clusteredness of an index has no bearing on plan choice.
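        For reference, in Postgres that reorganization is an explicit, one-time operation rather than a property the planner tracks per query; a sketch, reusing the table and index names from the example above:

        cluster t using d_date_idx;
        analyze t;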