On the question itself:
In that situation, I would add tests as convenient in any available free time (hah!) until a bug was found or the code needed to be changed for any other reason.
If the change is triggered by a bug, create one or more test cases to demonstrate the bug and prove both that it exists and that you know how to trigger it before making any changes to the code. (Proponents of test-driven development would classify "it should have a new feature, but doesn't" as a bug and perform this step for any change. While I find TDD to be a useful practice in many cases, you can still do good testing without embracing it.) These tests will tell you when you have fixed the bug.
Regardless of the cause of the change (bug, new feature, whatever), create additional tests for each subroutine before making any changes to that subroutine. These tests should verify that, for a full range of both valid and invalid inputs, you get the expected output. These tests will tell you whether you broke anything new in the course of making the changes (and you may discover some additional bugs in the process of writing the tests).
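A table-driven layout works well for this kind of "full range of inputs" characterization. The sketch below uses a hypothetical `parse_port` subroutine as a stand-in for whatever routine you are about to change; the cases cover valid values, boundaries, and invalid input:

```python
# Hypothetical subroutine about to be modified: pin down its current
# behavior across valid and invalid inputs before changing anything.
def parse_port(text):
    """Parse a TCP port number; raise ValueError for anything out of range."""
    port = int(text)  # raises ValueError for non-numeric text
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# Each case is (input, expected result or expected exception type).
cases = [
    ("80", 80),            # common valid value
    ("65535", 65535),      # upper boundary (valid)
    ("0", ValueError),     # lower boundary (invalid)
    ("70000", ValueError), # above the valid range
    ("http", ValueError),  # non-numeric input
]

for text, expected in cases:
    if isinstance(expected, type) and issubclass(expected, Exception):
        try:
            parse_port(text)
            raise AssertionError(f"{text!r}: expected {expected.__name__}")
        except expected:
            pass
    else:
        assert parse_port(text) == expected, f"{text!r} -> {expected}"
print("all cases passed")
```

Adding a row per newly discovered edge case keeps the table growing alongside the code, which is exactly the "solid test suite built up over time" the answer describes.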
By creating tests as needed before making any changes, you will eventually build up a solid test suite for your code, or at least for the portions which are subject to change.
On the side question of users not running the tests:
That's not really a major issue in most cases. Tests are tools for the developer's use and, in most cases, knowing that they run successfully on your machine(s) is sufficient.
But there are always exceptions, when some environmental issue brings out a bug on a user's system that doesn't occur on yours. In these cases, you need to identify the environmental issue and attempt to duplicate it on your development system in order to debug it - and this holds regardless of whether you're doing automated testing or not.
However, if you have your test suite, you can turn that into a compiled .exe and ask the affected user(s) to run it. If any tests fail there, then that gives you a head start towards isolating the problem and identifying its underlying cause. And, as with any bug, once you're able to emulate the cause and trigger the bug in your test suite, it will stay there and help to avoid the introduction of similar bugs (or re-introduction of the same one) in later versions of the code.
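One way to do this is a self-contained diagnostic runner that reports the environment alongside the results, so a failure report from a user immediately identifies the machine that triggered it. The sketch below uses Python's `unittest`; the specific checks are hypothetical, and bundling the script into a single executable with a tool such as PyInstaller is one option among several (an assumption, not a requirement):

```python
# Sketch of a diagnostic runner a user could execute on their own
# machine. The two environment-sensitive checks are illustrative only.
import platform
import unittest

class EnvironmentSmokeTests(unittest.TestCase):
    def test_path_separator_handling(self):
        # Catches assumptions about path separators that differ by OS.
        import os
        self.assertEqual(os.path.join("a", "b"), "a" + os.sep + "b")

    def test_utf8_roundtrip(self):
        # Catches encoding-related environment issues.
        self.assertEqual("café".encode("utf-8").decode("utf-8"), "café")

if __name__ == "__main__":
    # Print platform details next to the results so a failing run
    # doubles as an environment report.
    print(f"platform: {platform.platform()}  "
          f"python: {platform.python_version()}")
    unittest.main(verbosity=2)
```

Any test that fails on the user's machine but passes on yours narrows the search to whatever that check exercises, which is the head start the paragraph above describes.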