in reply to Best way to store/access large dataset?

You still haven't clearly described what the (filtering) query would look like.

So, after enlarging your attributes.txt to 1 million rows, I used Postgres as a front end to read the whole file (via file_fdw).
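For reference, a file_fdw setup along these lines makes the text file queryable as a table. The file path, delimiter, and exact column types are assumptions on my part, based on the column names used in the query below:

```sql
-- Sketch only: assumes attributes.txt is tab-delimited, with one text
-- "attribute" column followed by 16 numeric columns. Adjust the
-- filename, format, and delimiter options to match the real file.
CREATE EXTENSION IF NOT EXISTS file_fdw;
CREATE SERVER files FOREIGN DATA WRAPPER file_fdw;

CREATE FOREIGN TABLE public.atts (
    attribute      text,
    _1_file_ext    integer,
    _2_file_ext    integer,
    _3_file_ext    integer,
    _4_file_ext    integer,
    _5_file_ext    integer,
    _6_file_ext    integer,
    _7_file_ext    integer,
    _8_file_ext    integer,
    _9_file_ext    integer,
    _10_file_ext   integer,
    _11_file_ext   integer,
    _12_file_ext   integer,
    _13_file_ext   integer,
    _14_file_ext   integer,
    _15_file_ext   integer,
    _16_file_ext   integer
) SERVER files
  OPTIONS (filename '/path/to/attributes.txt',  -- hypothetical path
           format 'csv', delimiter E'\t');
```

The nice part of this approach is that nothing is loaded into the database: every query re-reads the file, which is why the 2-second figure below is the cost of a full scan.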

Reading and "calculating" the whole file takes 2 seconds.

This is the SQL I used:

    select attribute as attr
         , _1_file_ext + _4_file_ext + _13_file_ext + _16_file_ext as square
         , _2_file_ext + _5_file_ext + _11_file_ext + _12_file_ext as triangle
         , _3_file_ext + _6_file_ext + _7_file_ext  + _10_file_ext as circle
         , _8_file_ext + _9_file_ext + _14_file_ext + _15_file_ext as rectangle
    from public.atts

The more interesting part would probably be the WHERE clause (or possibly an ORDER BY clause) that you need, but I can't tell what that would look like from what you've said so far.
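Just to illustrate the shape such a query could take, here is a sketch with an invented filter (the threshold of 100 is purely hypothetical, since you haven't said what you filter on):

```sql
-- Hypothetical: keep only rows whose "square" total exceeds 100,
-- largest first. The threshold and sort are placeholders.
select attribute as attr
     , _1_file_ext + _4_file_ext + _13_file_ext + _16_file_ext as square
from public.atts
where _1_file_ext + _4_file_ext + _13_file_ext + _16_file_ext > 100
order by square desc;
```

Note that Postgres lets ORDER BY refer to an output-column alias like `square`, but WHERE cannot, which is why the expression is repeated there.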

UPDATE: I had typoed the order of the column names; that's fixed now.