
### Normalizing a range of numbers to a percentage

by stevieb (Canon)
on Mar 07, 2019 at 22:56 UTC

stevieb has asked for the wisdom of the Perl Monks concerning the following question:

Hey again esteemed Monks,

I've got yet another mathematical question which, once again, should be quite easy.

I'm adding a digital potentiometer to my Raspberry Pi CI unit test platform. This pot has 256 taps (0-255). (If you're unsure what a potentiometer is, think of the old days when you turned your radio volume up and down with a knob.)

For my tests, I'm looking to normalize the 256 taps to a value between 0 and 100 (percent). I've been looking online to sort this out, and I'm fairly sure I've got it, but there are so many answers that I thought I'd reach out here to make sure things are correct:

```
use warnings;
use strict;

use feature 'say';

my ($min, $max) = (0, 255);
my ($new_min, $new_max) = (0, 100);

for my $tap ($min .. $max){
    my $x = (($tap - $min) * ($new_max - $new_min) / ($max - $min));
    say "$tap: $x";
}
```

Output:

```
0: 0
1: 0.392156862745098
2: 0.784313725490196
3: 1.17647058823529

...

127: 49.8039215686275
128: 50.1960784313725
129: 50.5882352941176

...

252: 98.8235294117647
253: 99.2156862745098
254: 99.6078431372549
255: 100
```

It looks perfectly good to me for what I need, but I'd just like some reassurance, so that if I use this calculation in the future, I won't be left wondering whether something is wrong.

When I'm testing variable outputs from such integrated circuits (pots, digital-to-analog converters, etc.), I want to set up a normalized number range and test each point (within a 1-2% boundary), instead of jumping between chunks using AoAs for the data ranges to test against, as I do here.

Do my calculations within the code resonate well?
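For illustration, the tolerance-based check described above might be sketched with Test::More like this (my own sketch, not the OP's actual suite; the measured value here is a stand-in for a real reading from the device):

```perl
use strict;
use warnings;
use Test::More;

my ($max, $tolerance) = (255, 2);   # taps 0..255, 2% boundary

for my $tap (0 .. $max) {
    my $expected = $tap / $max * 100;
    my $measured = $expected;       # stand-in for a real device reading
    cmp_ok( abs($measured - $expected), '<=', $tolerance,
            "tap $tap within ${tolerance}%" );
}
done_testing();
```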

Replies are listed 'Best First'.
Re: Normalizing a range of numbers to a percentage
by tybalt89 (Prior) on Mar 07, 2019 at 23:15 UTC
```
my $x = (($tap - $min) * ($new_max - $new_min) / ($max - $min)) + $new_min;
```

I think of it this way:

```
($tap - $min) / ($max - $min)
```

is the ratio on the old scale; then multiply by the new scale and add the new offset.
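Putting the ratio and the offset together, a generic mapping sub might look like this (the sub name and interface are my own illustration, not from the thread):

```perl
use strict;
use warnings;

# Map $val from [$min, $max] onto [$new_min, $new_max]: take the ratio
# on the old scale, stretch it by the new scale's width, add the offset.
sub rescale {
    my ($val, $min, $max, $new_min, $new_max) = @_;
    return ($val - $min) / ($max - $min) * ($new_max - $new_min) + $new_min;
}

print rescale(128, 0, 255, 0, 100), "\n";   # ~50.196
print rescale(0,   0, 255, 5, 100), "\n";   # 5 -- the offset matters here
```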

Thanks tybalt89!

I did happen to simplify this down a bit because of the zero aspect, but for completeness, may I bug you to add a couple of examples of where my OP code *wouldn't* work?

Say, if $new_min were 5, for example?

With $new_min = 5, it would give a range of 0..95 rather than 5..100 if you don't include the offset.

With -100 to +100 as the new min/max, it would give 0..200 rather than -100..+100.

ugh, can't believe I missed that. I do that type of calculation so often, I should have noticed :-(

I do it so often, I finally wrote a simultaneous equation solver (with Math::BigRat) so I just plug in end points and it gives me the multiplier and constant offset :)
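The solver itself isn't shown, but for a straight line through two endpoints the simultaneous equations collapse to a two-line solution; a plain floating-point sketch (Math::BigRat, mentioned above for exactness, is omitted here):

```perl
use strict;
use warnings;

# Fit y = m*x + b through endpoints (x1, y1) and (x2, y2).
sub line_through {
    my ($x1, $y1, $x2, $y2) = @_;
    my $m = ($y2 - $y1) / ($x2 - $x1);   # multiplier
    my $b = $y1 - $m * $x1;              # constant offset
    return ($m, $b);
}

# Endpoints (0, 5) and (255, 100) give the 5..100 mapping directly:
my ($m, $b) = line_through(0, 5, 255, 100);
print "tap 0   -> ", $m * 0   + $b, "\n";   # 5
print "tap 255 -> ", $m * 255 + $b, "\n";   # 100
```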

Re: Normalizing a range of numbers to a percentage
by choroba (Archbishop) on Mar 07, 2019 at 23:21 UTC
I remember knobs from an electric guitar that weren't linear. For such a potentiometer, you might need a different formula, but for linear ones, it seems correct.

map{substr\$_->[0],\$_->[1]||0,1}[\*||{},3],[[]],[ref qr-1,-,-1],[{}],[sub{}^*ARGV,3]

It depends. As far as I remember, Gibson volume pots are linear and their tone pots are logarithmic. For Fender and Ibanez I'm not sure, but I think they are all logarithmic. And I also think that there is no audio device with a linear volume pot; it makes no sense. Regards, Karl

«The Crux of the Biscuit is the Apostrophe»

perl -MCrypt::CBC -E 'say Crypt::CBC->new(-key=>'kgb',-cipher=>"Blowfish")->decrypt_hex($ENV{KARL});'
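For the curious, a log (audio) taper is often idealized as an exponential curve; the sketch below is a textbook approximation with an assumed steepness, not any particular manufacturer's curve:

```perl
use strict;
use warnings;

# Idealized "audio taper": map position 0..1 onto an exponential curve.
# $b controls steepness (assumed value; real pots vary by maker) --
# with $b = 100, the midpoint lands near 10% of full scale, roughly
# matching a classic "10% at center" log pot.
my $b = 100;

for my $tap (0, 64, 128, 192, 255) {
    my $p   = $tap / 255;
    my $pct = ($b ** $p - 1) / ($b - 1) * 100;
    printf "%3d: %6.2f%%\n", $tap, $pct;
}
```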

Re: Normalizing a range of numbers to a percentage
by BrowserUk (Patriarch) on Mar 08, 2019 at 11:22 UTC

Save runtime. Do the calculation once and then use a lookup:

```
#! perl -slw
use strict;

my $lookup = join '', map{ chr( $_ / 255 * 100 ) } 0 .. 255;

print "$_ :: ", ord( substr $lookup, $_, 1 ) for 0 .. 255;

__END__
C:\test>junk99
0 :: 0 1 :: 0 2 :: 0
3 :: 1 4 :: 1 5 :: 1
6 :: 2 7 :: 2
8 :: 3 9 :: 3 10 :: 3
11 :: 4 12 :: 4
13 :: 5 14 :: 5 15 :: 5
16 :: 6 17 :: 6
18 :: 7 19 :: 7 20 :: 7
21 :: 8 22 :: 8
23 :: 9 24 :: 9 25 :: 9
26 :: 10 27 :: 10 28 :: 10
29 :: 11 30 :: 11
31 :: 12 32 :: 12 33 :: 12
34 :: 13 35 :: 13
36 :: 14 37 :: 14 38 :: 14
39 :: 15 40 :: 15
41 :: 16 42 :: 16 43 :: 16
44 :: 17 45 :: 17
46 :: 18 47 :: 18 48 :: 18
49 :: 19 50 :: 19
51 :: 20 52 :: 20 53 :: 20
54 :: 21 55 :: 21 56 :: 21
57 :: 22 58 :: 22
59 :: 23 60 :: 23 61 :: 23
62 :: 24 63 :: 24
64 :: 25 65 :: 25 66 :: 25
67 :: 26 68 :: 26
69 :: 27 70 :: 27 71 :: 27
72 :: 28 73 :: 28
74 :: 29 75 :: 29 76 :: 29
77 :: 30 78 :: 30 79 :: 30
80 :: 31 81 :: 31
82 :: 32 83 :: 32 84 :: 32
85 :: 33 86 :: 33
87 :: 34 88 :: 34 89 :: 34
90 :: 35 91 :: 35
92 :: 36 93 :: 36 94 :: 36
95 :: 37 96 :: 37
97 :: 38 98 :: 38 99 :: 38
100 :: 39 101 :: 39
102 :: 40 103 :: 40 104 :: 40
105 :: 41 106 :: 41 107 :: 41
108 :: 42 109 :: 42
110 :: 43 111 :: 43 112 :: 43
113 :: 44 114 :: 44
115 :: 45 116 :: 45 117 :: 45
118 :: 46 119 :: 46
120 :: 47 121 :: 47 122 :: 47
123 :: 48 124 :: 48
125 :: 49 126 :: 49 127 :: 49
128 :: 50 129 :: 50 130 :: 50
131 :: 51 132 :: 51
133 :: 52 134 :: 52 135 :: 52
136 :: 53 137 :: 53
138 :: 54 139 :: 54 140 :: 54
141 :: 55 142 :: 55
143 :: 56 144 :: 56 145 :: 56
146 :: 57 147 :: 57
148 :: 58 149 :: 58 150 :: 58
151 :: 59 152 :: 59
153 :: 60 154 :: 60 155 :: 60
156 :: 61 157 :: 61 158 :: 61
159 :: 62 160 :: 62
161 :: 63 162 :: 63 163 :: 63
164 :: 64 165 :: 64
166 :: 65 167 :: 65 168 :: 65
169 :: 66 170 :: 66
171 :: 67 172 :: 67 173 :: 67
174 :: 68 175 :: 68
176 :: 69 177 :: 69 178 :: 69
179 :: 70 180 :: 70 181 :: 70
182 :: 71 183 :: 71
184 :: 72 185 :: 72 186 :: 72
187 :: 73 188 :: 73
189 :: 74 190 :: 74 191 :: 74
192 :: 75 193 :: 75
194 :: 76 195 :: 76 196 :: 76
197 :: 77 198 :: 77
199 :: 78 200 :: 78 201 :: 78
202 :: 79 203 :: 79
204 :: 80 205 :: 80 206 :: 80
207 :: 81 208 :: 81 209 :: 81
210 :: 82 211 :: 82
212 :: 83 213 :: 83 214 :: 83
215 :: 84 216 :: 84
217 :: 85 218 :: 85 219 :: 85
220 :: 86 221 :: 86
222 :: 87 223 :: 87 224 :: 87
225 :: 88 226 :: 88
227 :: 89 228 :: 89 229 :: 89
230 :: 90 231 :: 90 232 :: 90
233 :: 91 234 :: 91
235 :: 92 236 :: 92 237 :: 92
238 :: 93 239 :: 93
240 :: 94 241 :: 94 242 :: 94
243 :: 95 244 :: 95
245 :: 96 246 :: 96 247 :: 96
248 :: 97 249 :: 97
250 :: 98 251 :: 98 252 :: 98
253 :: 99 254 :: 99
255 :: 100
```

And if you'd prefer fewer zeros, or more 100s, then fudge it. (Add 0.5.)
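The "(Add 0.5)" fudge is ordinary round-to-nearest: truncation biases every value toward zero, so adding 0.5 before truncating shifts the cut-off point. A quick sketch of the difference:

```perl
use strict;
use warnings;

# int() truncates toward zero; adding 0.5 first rounds to nearest,
# trading the extra 0s at the bottom for a 100 at the top.
for my $tap (1, 2, 127, 128, 254) {
    my $trunc = int( $tap / 255 * 100 );
    my $round = int( $tap / 255 * 100 + 0.5 );
    print "$tap: trunc=$trunc round=$round\n";
}
```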

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority". The enemy of (IT) success is complexity.
In the absence of evidence, opinion is indistinguishable from prejudice. Suck that fhit

I'm currently reading the book Higher Order Perl and I'm on the section covering memoization. The book says a single multiplication won't be improved on by memoization, because of the overhead involved and because multiplication is already fast. I'm surprised that your lookup with ord() and substr() is faster than a single multiplication. The only difference I see is that my function uses a float constant, whereas ord() and substr() don't need to do any float calculations.

```
#!/usr/bin/env perl
use warnings;
use strict;
use Benchmark 'cmpthese';

my $lookup = join '', map{ chr( $_ / 255 * 100 ) } 0 .. 255;
my $const = 100 / 255;

cmpthese(-2, {
    lookup => sub {
        my @output = map { ord( substr $lookup, $_, 1 ) } 0 .. 255;
    },
    calc   => sub {
        my @output = map { $_ / 255 * 100 } 0 .. 255;
    },
    calc2  => sub {
        my @output = map { $_ * $const } 0 .. 255;
    },
});

my @output1 = map { ord( substr $lookup, $_, 1 ) } 0 .. 5;
my @output2 = map { $_ * $const } 0 .. 5;
print "@output1\n";
print "@output2\n";
__END__

# Results on my machine (v5.22.1 built for MSWin32-x64-multi-thread):
           Rate   calc  calc2 lookup
calc    12009/s     --   -24%   -31%
calc2   15753/s    31%     --    -9%
lookup  17376/s    45%    10%     --
0 0 0 1 1 1
0 0.392156862745098 0.784313725490196 1.17647058823529 1.56862745098039 1.96078431372549
```
> the ord() and substr() don't need to do float calculations.

In `$_ * $const`, first the integer in $_ is promoted to a double (to match the type of $const), then the multiplication is done, and then (to make it useful for the OP, though you aren't doing it here) the result needs to be converted (trunc'd) back to an integer. (If you added back that necessity, the difference would be more marked.)

Runtime memoization and lookup using a hash (per the Memoize module) would be much slower, because each input integer needs to be converted to a string, then that string must be hashed, then taken modulo the hash size (which must be looked up), then that table entry inspected, and (potentially) a linear search of an array performed, before the value is found.

The lookup essentially consists of a direct index, and done.

For the OP's purpose, an array lookup would probably be even quicker:

```
#! perl -slw
use strict;

my @lookup = map{ int( $_ / 255 * 100 ) } 0 .. 255;

print "$_ :: ", $lookup[ $_ ] for 0 .. 255;
```

Re: Normalizing a range of numbers to a percentage
by hdb (Monsignor) on Mar 08, 2019 at 08:22 UTC

For such a simple translation it might be overkill, but you could also use an interpolation module such as Math::Interpolate.

```
use strict;
use warnings;
use Math::Interpolate qw(linear_interpolate);

print "$_: ".linear_interpolate( $_, [0,255], [0,100] )."\n" for 0..255;
```
Re: Normalizing a range of numbers to a percentage
by pryrt (Monsignor) on Mar 07, 2019 at 23:08 UTC

That is the right math to map from 0-255 to 0-100, and it has been made properly generic to handle different ranges (for example, if you someday wanted to use a 16-bit part instead of an 8-bit one).
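To see the "properly generic" point concretely, the same formula with a hypothetical 16-bit part (taps 0..65535) drops in unchanged:

```perl
use strict;
use warnings;

# Same formula, different constants: a hypothetical 16-bit part.
my ($min, $max)         = (0, 65535);
my ($new_min, $new_max) = (0, 100);

for my $tap (0, 32768, 65535) {
    my $x = ($tap - $min) * ($new_max - $new_min) / ($max - $min);
    print "$tap: $x\n";                 # 32768 maps to ~50.0008
}
```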

Re: Normalizing a range of numbers to a percentage
by stevieb (Canon) on Mar 09, 2019 at 23:00 UTC

Thank you for all of the feedback here on this thread. It's been helpful in a true math sense, but also in a Perl/programming sense.

For what it's worth, yes, I'm clearly a bit detached from my basic algebra.

All of the input here has been extremely beneficial, and for that, I'm very appreciative.

-stevieb

Re: Normalizing a range of numbers to a percentage
by Anonymous Monk on Mar 08, 2019 at 13:43 UTC

Node Type: perlquestion [id://1231032]
Approved by choroba