
Re^4: CGI Program To Delete Old Data From MySQL Table?

by Jenda (Abbot)
on Oct 01, 2013 at 22:29 UTC ( #1056547=note )

in reply to Re^3: CGI Program To Delete Old Data From MySQL Table?
in thread CGI Program To Delete Old Data From MySQL Table?

First, it depends on whether the index is clustered. Second, for a script that's apparently supposed to run regularly and delete the old records, it's safe to assume that the number of records to delete will be small compared to the total number of records: 1/61 if the script is run daily, 7/67 if weekly.
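As a sketch of the kind of periodic cleanup query under discussion (the table and column names are made up, since the OP's schema isn't shown; assumes MySQL and a 60-day retention window):

```sql
-- Hypothetical schema: a log table with a timestamp column.
-- Run daily (e.g. from cron or the CGI script) to drop rows
-- older than 60 days; with an index on created_at the server
-- can locate the small fraction of old rows directly instead
-- of scanning the whole table.
DELETE FROM access_log
WHERE  created_at < NOW() - INTERVAL 60 DAY;
```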

Enoch was right!
Enjoy the last years of Rome.


Replies are listed 'Best First'.
Re^5: CGI Program To Delete Old Data From MySQL Table?
by erix (Vicar) on Oct 02, 2013 at 08:04 UTC

    We're descending a bit in database arcana (and not even the OPs database's), but what the hell:

    Small tables don't use indexes. This 60-row table is a small table. I can't get Postgres to use an index for either 1/61 or 7/67 of the rows. (I even tried broadening the rows with more columns, but no.) Index use only starts at a few hundred rows in my example. A quick Google search makes me think Oracle behaves the same. MySQL I'm not sufficiently interested in to look up; it's probably the same.
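    The planner's choice can be checked directly; a minimal Postgres experiment along these lines (the table, sizes, and names are illustrative, not from the thread) would be:

```sql
-- Build a 61-row table with an index, then ask the planner what it would do.
CREATE TABLE t (id serial PRIMARY KEY, created timestamptz);
INSERT INTO t (created)
    SELECT now() - (i || ' days')::interval
    FROM generate_series(1, 61) AS s(i);
CREATE INDEX t_created_idx ON t (created);
ANALYZE t;

-- For a table this small the planner typically shows a sequential scan,
-- not an index scan, regardless of the predicate's selectivity.
EXPLAIN DELETE FROM t WHERE created < now() - interval '60 days';
```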

    I don't see what clustered indexes have to do with the whole thing. Clustering reorganizes the physical table data according to the index ordering. Obviously that can make things faster, but I rather suspect that whether an index is clustered has no bearing on plan choice.

      If the table has 60 rows, then any plan and any query will do. I was talking about one 61st of the rows, not a single row out of 61 rows total. If the table needs to be periodically cleared of old rows, it probably grows quicker than one row a day.

      Clustered indexes do have something to do with the thing. If you use a non-clustered index, then the index basically only contains references to the rows. Therefore, for the index to be beneficial to the query execution, the number of selected rows has to be smaller, relative to the total number of rows, than for a clustered index. Though with a delete it's probably a little different, at least with MSSQL, because deleting a whole range of rows means a fairly big change to the tree and if the table is wide enough ...
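      In MSSQL terms the distinction looks roughly like this (table and index names are made up for illustration; you'd create one or the other, not both):

```sql
-- Clustered: the table rows themselves are stored in created_at order,
-- so a range predicate reads one contiguous slice of the table.
CREATE CLUSTERED INDEX ix_log_created_c
    ON dbo.access_log (created_at);

-- Nonclustered: the index holds only row locators, so each matching
-- entry costs an extra lookup into the base table; the break-even
-- selectivity for choosing this index is correspondingly lower.
CREATE NONCLUSTERED INDEX ix_log_created_nc
    ON dbo.access_log (created_at);
```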

      I just tested it on MSSQL with a 600-row table (fairly wide) and could not get it to stop using even the nonclustered index, even when the condition was expected to select 98% of the rows. Not even with a 10000-row table (same width; the table was created by a SELECT TOP xxx FROM a real table).

      I guess whoever ends up doing the actual job will need to test it and find out what works best in this particular situation. Either way, I would recommend writing the queries so that the server may decide to use an index, whether the index exists at the moment or not. Unless, of course, it means you have to bend over backwards to write it like that.
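      One way to keep the delete index-friendly ("sargable") versus not, with hypothetical MySQL table and column names:

```sql
-- Index-friendly: the indexed column stands alone on one side of the
-- comparison, so the server is free to use an index on created_at.
DELETE FROM access_log
WHERE  created_at < NOW() - INTERVAL 60 DAY;

-- Not index-friendly: wrapping the column in a function means the
-- server must evaluate it for every row, so the index goes unused.
DELETE FROM access_log
WHERE  DATEDIFF(NOW(), created_at) > 60;
```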

      Enoch was right!
      Enjoy the last years of Rome.
