Others have pointed out your misunderstanding of next. Here are some other pointers.
You could use join and the string multiplier (see Multiplicative Operators in perlop) to save a lot of typing in your printVariables subroutine.
$ perl -e '
> sub printVariables
> {
> print join qq{\n}, @_, q{#} x 10, q{};
> }
>
> $v1 = 123;
> $v2 = 456;
> printVariables( $v1, $v2 );
>
> @arr = qw{ pete john mike };
> printVariables( @arr );'
123
456
##########
pete
john
mike
##########
$
You don't seem to call it, but your processLine subroutine goes a very long way around the houses to achieve the same result as a simple
chomp $line;
in the body of your code would have done.
You don't need to unlink a pre-existing file if you are about to open it for writing.
You open "c:\\nursing_homes.txt" for reading, process it to remove blank lines, write the changes to "c:\\nursing_homes_out.txt", and then re-open and read that file in your database insertion loop. Unless you need the processed file elsewhere, why bother? Just work on the original file in your main database insertion loop and include the next if $_ =~ /^\s*$/; line there.
Why do you initialise $query_string but then not use it in the my $query_handle = $connect->prepare( ... ); line, instead re-typing exactly the same code again? Seems a bit wasteful of effort to me.
Rather than using concatenation
... die "Can't open file " . $out_file . "\n$!\n";
just interpolate into the string as you've already done with the $! variable
... die "Can't open file $out_file\n$!\n";
I hope these points are helpful.
Update: Corrected a cut'n'paste error where I'd copied an earlier piece of test code that used a shorter subroutine name in the call, pvar rather than printVariables.