Since your daily CSV file only provides column headings (field names), not data types (integer vs. string vs. date) or maximum field widths, your main problem is the "CREATE TABLE" statement: it has to accommodate the longest string that might occur in any column, and it has to give up on appropriate data types. Every field will have to be "varchar", which means you can't use mysql's type-specific functions for things like dates and numbers when you query this table.
(The field names might help you figure out what data types to use, and/or you could apply some heuristics, reading the CSV file once in advance to guess appropriate types for some fields -- but maybe this isn't an issue for you anyway.)
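If you do want to try that heuristic pass, here's one way it might look. This is only a sketch: guess_type is a hypothetical helper, and the patterns are deliberately simple -- tighten them for your own data before trusting the result.

```perl
use strict;
use warnings;

# Hypothetical helper: guess a MySQL column type from the values
# seen in one column during a first pass over the CSV file.
sub guess_type {
    my @values = @_;
    return 'varchar(255)' unless @values;

    # all values are plain integers?
    return 'int' if !grep { !/^-?\d+$/ } @values;

    # all values are numeric (possibly with a decimal point)?
    return 'decimal(20,6)' if !grep { !/^-?\d+(?:\.\d+)?$/ } @values;

    # all values look like ISO dates?
    return 'date' if !grep { !/^\d{4}-\d{2}-\d{2}$/ } @values;

    # fall back to varchar, sized to the longest value seen
    my ($max) = sort { $b <=> $a } map { length } @values;
    return 'varchar(' . ( $max > 255 ? $max : 255 ) . ')';
}

print guess_type(qw(1 2 42)),       "\n";   # int
print guess_type(qw(1.5 2 3.25)),   "\n";   # decimal(20,6)
print guess_type('2024-01-31'),     "\n";   # date
print guess_type('hello', 'world'), "\n";   # varchar(255)
```

You'd collect the values per column while looping over $csv->getline() once, call guess_type on each column, and then build the CREATE TABLE from the results instead of hard-coding varchar(255) everywhere.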
Since you're looking at Text::CSV_XS, maybe something like this would get you started:
use strict;
use warnings;
use Text::CSV_XS;
use DBI;

my $db = DBI->connect( ... );

$db->do( "drop table if exists new_table" );

my $csv = Text::CSV_XS->new( { binary => 1, auto_diag => 1 } );
open( my $infile, '<', 'filename.csv' ) or die "filename.csv: $!";

# get CSV header (the first row supplies the column names)
my $hdr = $csv->getline( $infile );

# no type info in the CSV, so every column becomes varchar(255);
# backticks keep odd header names from breaking the SQL
my $create = "create table new_table ("
    . join( ", ", map { "`$_` varchar(255)" } @$hdr ) . ")";
$db->do( $create );
# ... at this point, you could prepare an insert statement
# for mysql, loop over $csv->getline() and execute the insert
# for each row.
#
# but if you use LOAD DATA INFILE instead, it'll be much faster ...
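The insert loop described in those comments could be sketched like this. The snippet below just builds and prints the parameterized SQL from a stand-in header (the real $hdr comes from $csv->getline as above); the actual prepare/execute part, which needs a live $db handle, is shown in the trailing comments.

```perl
use strict;
use warnings;

# stand-in for the arrayref returned by $csv->getline( $infile )
my $hdr = [ 'name', 'amount', 'created' ];

# one "?" placeholder per column, so DBI does the quoting for us
my $sql = "insert into new_table ("
    . join( ", ", map { "`$_`" } @$hdr )
    . ") values (" . join( ", ", ('?') x @$hdr ) . ")";
print "$sql\n";
# prints: insert into new_table (`name`, `amount`, `created`) values (?, ?, ?)

# with a live handle you would then do:
#   my $ins = $db->prepare( $sql );
#   while ( my $row = $csv->getline( $infile ) ) {
#       $ins->execute( @$row );
#   }
```

Preparing once and executing per row is the idiomatic DBI pattern, but as noted above, handing the file to LOAD DATA INFILE will still beat it on speed for a large daily file.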