PerlMonks
Re: CGI Program To Delete Old Data From MySQL Table?

by erix (Vicar)
on Sep 30, 2013 at 17:10 UTC


in reply to CGI Program To Delete Old Data From MySQL Table?

UPDATE: Oops. You *did* mention the DBMS, in the title no less. I didn't see that. So this is a little off topic -- sorry about that.

In PostgreSQL:

delete from table where now() - column > interval '60 days';
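A portable alternative is to compute the cutoff in the application and pass it as a bind parameter, which works the same way in any DBMS. A minimal sketch, using Python's sqlite3 as a stand-in database (table name `t` and column `d` are just for illustration):

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (d TEXT)")  # ISO-8601 timestamps sort correctly as text

# 100 daily rows, one per day going back from now
now = datetime.now()
conn.executemany("INSERT INTO t VALUES (?)",
                 [((now - timedelta(days=i)).isoformat(),) for i in range(100)])

# Compute the cutoff in the application and bind it, instead of
# doing the date arithmetic inside the WHERE clause.
cutoff = (now - timedelta(days=60)).isoformat()
cur = conn.execute("DELETE FROM t WHERE d < ?", (cutoff,))
conn.commit()
print(cur.rowcount)  # rows strictly older than 60 days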

I wrote a little standalone shell script to show that off. It creates 100 days' worth of rows, shows some counts, DELETEs with an appropriate interval expression, then counts again:

(careful: it drops table t)

#!/bin/sh
echo "
drop table if exists t;  -- dropping a table, be careful!
create table t as
  select d
  from generate_series(
      current_timestamp - interval '99 days'
    , current_timestamp
    , interval '1 day' ) as f(d)
;
-- UPDATE: add this for index tests:
-- create index t_d_idx on t (d);
select count(*)
  , sum( case when now() - d > interval '60 days' then 0 else 1 end ) as records_to_keep
  , sum( case when now() - d > interval '60 days' then 1 else 0 end ) as records_to_dump
from t;
delete from t where now() - d > interval '60 days' ;
select count(*)
  , sum( case when now() - d > interval '60 days' then 0 else 1 end ) as records_to_keep
  , sum( case when now() - d > interval '60 days' then 1 else 0 end ) as records_to_dump
from t;
" | psql -X

Output:

DROP TABLE
SELECT 100
 count | records_to_keep | records_to_dump
-------+-----------------+-----------------
   100 |              60 |              40
(1 row)
DELETE 41
 count | records_to_keep | records_to_dump
-------+-----------------+-----------------
    60 |              60 |               0
(1 row)


Re^2: CGI Program To Delete Old Data From MySQL Table?
by Jenda (Abbot) on Oct 01, 2013 at 09:16 UTC

    Unless PostgreSQL is extremely clever, this query effectively prevents the use of an index on the date column, because the column is wrapped in an expression rather than compared directly to a constant. You should use something like this instead (MSSQL syntax, untested):

    delete from table where column < DateAdd(day,-60,getdate());

    Jenda
    Enoch was right!
    Enjoy the last years of Rome.

      I did think of including that optimization, but I chose the form above because it is conceptually easier; the idea was to show the ease of use of the postgres interval datatype.

      And remember that index retrieval is not always faster. A SeqScan beats index retrieval when the query hits a large part of the table, as when deleting 40 out of 100 rows, or when keeping 60 days out of the 'hundreds' of rows the OP mentions. Of course there is no way to know what the distribution in the OP's table is.

      Indeed, for my example it turns out a seq scan is still preferred over the index, with my original date data. Only when the number of deleted rows becomes small compared to the total rowcount does Pg use the index. So yes, PostgreSQL /is/ extremely clever ;-)

      The index-usable statement for postgres could be:

      delete from t where d < now() - interval '60 days';

      I tweaked my little program to accept an arg1=number of created rows and an arg2=number of rows to keep.

      $ pm/1056374.sh 99 60   # create table with 99 rows, keep 60
      DROP TABLE
      SELECT 100
      CREATE INDEX
      ANALYZE
       count | records_to_keep | records_to_dump
      -------+-----------------+-----------------
         100 |              60 |              40
      (1 row)
                             QUERY PLAN
      --------------------------------------------------------
       Delete on t  (cost=0.00..2.75 rows=39 width=6)
         ->  Seq Scan on t  (cost=0.00..2.75 rows=39 width=6)
               Filter: (d <= (now() - '60 days'::interval))
      (3 rows)
      DELETE 40
       count | records_to_keep | records_to_dump
      -------+-----------------+-----------------
          60 |              60 |               0
      (1 row)

      $ pm/1056374.sh 999 990   # create table with 999 rows, keep 990
      DROP TABLE
      SELECT 1000
      CREATE INDEX
      ANALYZE
       count | records_to_keep | records_to_dump
      -------+-----------------+-----------------
        1000 |             990 |              10
      (1 row)
                                   QUERY PLAN
      ---------------------------------------------------------------------------
       Delete on t  (cost=0.28..8.45 rows=10 width=6)
         ->  Index Scan using d_date_idx on t  (cost=0.28..8.45 rows=10 width=6)
               Index Cond: (d <= (now() - '990 days'::interval))
      (3 rows)
      DELETE 10
       count | records_to_keep | records_to_dump
      -------+-----------------+-----------------
         990 |             990 |               0
      (1 row)

        First, it depends on whether the index is clustered. Second, in the case of a script that is apparently supposed to run regularly and delete the old records, it is safe to assume that the number of records to delete will be small compared to the total number of records: 1/61 if the script is run daily, 7/67 if weekly.

        Jenda
        Enoch was right!
        Enjoy the last years of Rome.
