The general rule of thumb, when all else is equal (or even vaguely close to equal), is to optimize for readability.
If you really must decide on the basis of performance, the readable option wins in this case anyway, because it gives the compiler the actual bytes it needs instead of an escape sequence that must first be converted into those bytes. The conversion takes non-zero time, so not needing to convert takes less time. Post-compilation, the executed code will contain the actual bytes either way, so there will be no difference at run time (unless you're inside a string eval or something similar that forces the escape sequence to be processed repeatedly).
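To make the "resolved at compile time" point concrete, here's a minimal sketch (using Python for illustration, since the same principle applies in most compiled-to-bytecode languages): a string written with an escape sequence and one written with the literal character compile down to the same constant, so there is no per-execution cost for either spelling outside of a string eval.

```python
# Two spellings of the same one-character string: the literal character
# versus an escape sequence. The escape is resolved once, at compile time.
literal = "é"        # the actual bytes in the source
escaped = "\u00e9"   # the same character written as an escape sequence

# Both spellings produce identical string contents.
assert literal == escaped

# Compiling each spelling separately yields code objects whose constant
# is the same string, so the executed code contains the actual character
# either way; only the compilation step differs.
code_literal = compile('"é"', "<s>", "eval")
code_escaped = compile('"\\u00e9"', "<s>", "eval")
assert eval(code_literal) == eval(code_escaped)
```

The only time the escape form pays its conversion cost repeatedly is when the source text itself is re-compiled on each use, which is exactly the string-eval situation mentioned above.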
But, although the conversion takes non-zero time, the time required is very nearly zero. Even if your script is run millions of times, the aggregate difference in run time across all those runs will be less than the time it takes me to type this word. There's a good reason people are so against micro-optimization: in the vast majority of cases, the time spent designing and implementing the optimization is orders of magnitude greater than the time it saves in execution.
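If you want to verify the "very nearly zero" claim yourself rather than take it on faith, a rough measurement is easy (again sketched in Python; the variable names and iteration count are arbitrary choices, not anything from the original):

```python
import timeit

# Time the full compile step for each spelling of the same string.
# This deliberately over-counts the cost: it includes parsing the whole
# expression, not just converting the escape sequence.
n = 100_000
t_escaped = timeit.timeit(lambda: compile(r'"\u00e9"', "<s>", "eval"), number=n)
t_literal = timeit.timeit(lambda: compile('"é"', "<s>", "eval"), number=n)

# Both totals come out as a fraction of a second for 100,000 compilations,
# so the per-compilation difference between the two is microscopic.
print(f"escaped: {t_escaped:.4f}s, literal: {t_literal:.4f}s for {n} compiles")
```

Run it a few times and you'll see the gap between the two is down in measurement noise, which is the point: this is a compile-time cost paid once per compilation, not once per execution.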