Microcontroller Programming » Using concise fractions to save time and memory in microprocessor applications
August 31, 2010 by bretm |
This isn’t a question, it’s more along the lines of a tutorial. Floating-point support takes up a lot of valuable space on the microcontroller and floating-point routines are slow. It’s common to optimize size and speed of microcontroller apps by using fixed-point approximations to real values. For example, you might represent numbers by integers that are 10000 times the actual value. In this scheme, pi would be represented by the integer value 31416. Multiplication could be done just by multiplying the integers and then dividing by 10000, e.g. sqrt(2) times pi would be 14142 x 31416 / 10000 = 444285072 / 10000 = 44429 (rounded), which represents 4.4429. This is a great performance and memory trade-off if you don’t need the 7 digits of precision that the float type gives you.

But fixed-point representations, which use a fixed denominator such as 10000 for human readability or 65536 for performance reasons, aren’t necessarily the most accurate option. Consider the fraction 239/169 as an approximation for sqrt(2). This representation has a value of 1.41420118… which has almost 5 decimal places of accuracy when compared to the actual value 1.41421356…, and it still only requires two bytes to represent it. Multiplying an integer (which itself might be a fixed-point representation of another number) by sqrt(2) can be done by multiplying by 239 and then dividing by 169. This is more accurate and takes fewer clock cycles than multiplying by 14142 and dividing by 10000.

But how do you choose the best fraction to represent a given real number? Many a programmer’s first instinct would be to use as much precision as possible in the numerator or denominator by choosing the largest value in the available range. For example, if you want to limit your numbers to 256 you might think that choosing 256/x would be the most accurate approximation for sqrt(2). In this case you would end up with the approximation 256/181, which is accurate to not-quite 4 decimal places. (It’s not as accurate, but it’s faster because multiplying by 256 can be represented by an implicit byte-shift.)

Another approach would be to try every possible numerator in a given range and choose the result that gives you the most accurate answer. In this case if you tried 256, then 255, then 254, etc., you’d eventually find 239/169, but then you’d have to try a lot of other ratios before you could be sure that there wasn’t a more accurate one.

Here’s an algorithm you can use to find the "best" approximations to a given real value. In this case "best" means the most accurate for a denominator of a given magnitude. You can do this by hand, but a spreadsheet makes it a lot easier. Start by setting up four columns, starting with the following setup:
Your number goes in cell A3. In this case I chose the natural logarithm of 10. The numbers 0, 1, 1, 0 go into the top of columns C and D as shown (column C starts with 0 and then 1; column D starts with 1 and then 0). Next, take the whole part of your number and put it in column B.
Now, multiply the C and D values from the previous row by the new value in column B, and then add the C and D values from two rows up. In this case we have 2 times (1,0) equals (2,0), and then add (0,1) to get (2,1).
On the next row, calculate 1.0 divided by the fractional portion of the previous real number. In this case we want 1.0 / 0.302585093 which is 3.304855471, and put it on the next row:
Now repeat the previous steps for columns B, C, and D. Put the integer part of the new number in B. Multiply the previous C and D by the new B and add the C and D from two rows ago:
Keep doing this a few more times, and eventually you’ll get better and better fractional approximations in columns C and D. After a few more rows you see the approximation 175/76, which has more than 4 decimal places of accuracy. Add a few more rows and you’ll get increasingly accurate answers, at the cost of larger (higher-precision) numerators and denominators.
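If you’d rather let code build the table, here’s a minimal Python sketch of the same procedure (the function name and the eight-row cutoff are just illustrative):

```python
import math

def best_fractions(x, rows=8):
    # Columns C and D start with 0, 1 and 1, 0 as in the spreadsheet.
    c_prev2, c_prev1 = 0, 1
    d_prev2, d_prev1 = 1, 0
    results = []
    for _ in range(rows):
        b = int(x)                     # column B: the whole part of column A
        c = b * c_prev1 + c_prev2      # new C = B times previous C, plus C from two rows up
        d = b * d_prev1 + d_prev2      # new D = B times previous D, plus D from two rows up
        results.append((c, d))
        c_prev2, c_prev1 = c_prev1, c
        d_prev2, d_prev1 = d_prev1, d
        frac = x - b
        if frac == 0:                  # a rational input ends the table here
            break
        x = 1.0 / frac                 # column A for the next row
    return results

# For ln(10) the rows include (7, 3), (23, 10), (76, 33), (99, 43), (175, 76), ...
print(best_fractions(math.log(10)))
```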
I chose logE(10) instead of sqrt(2) for this example because square roots have an interesting property that would have made it a confusing example. When you apply this algorithm to them, the numbers in column B eventually start to repeat. For exact rational numbers, the fractional part eventually hits zero and the table simply ends. For many irrational numbers the column B values continue unpredictably forever. For some irrational numbers they don’t repeat but they are completely predictable. Here’s how the table would look for sqrt(2):
Column A and B stay at 1+sqrt(2) and 2 forever. And there’s the 239/169 I used earlier. Here’s the table for “e”, the base of the natural logarithm:
The fraction 193/71 is accurate to almost 5 decimal places. The values in column B continue on: 6, 1, 1, 8, 1, 1, 10, 1, 1, 12, 1, 1, ... See the pattern? Cutting the list off just before a large number in column B provides a particularly excellent approximation for a given precision. For example, here’s the table for pi:
If we stop at 355/113 we get a number that is accurate to almost 7 decimal places! You won’t find a better approximation until the numerator and denominator are both five digits long. Including the next row gives us a fraction that is accurate to one part in 5 billion, but at the cost of more storage and computation time if we were to use those constants in a program. The series of numbers in column B is called the continued-fraction representation of the number. For pi, it is written as [3; 7, 15, 1, 292, ...]. For e it is [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]. It is shorthand for a "continued" fraction of the form a + 1/(b + 1/(c + 1/(d + …))). For example, [3; 7, 16] is equal to 3 + 1 / (7 + 1 / 16), which is also 355/113. (I used a trick there: if the last number in a list is 1, e.g. [3; 7, 15, 1], you can discard the 1 and add it to the previous number, giving [3; 7, 16].) The ratios 3/1, 22/7, 333/106, 355/113, ... are called the convergents of the continued fraction, and each is guaranteed to be a better approximation than any fraction with a smaller denominator. For more information, see Continued Fraction on Wikipedia.
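To check the notation, here’s a quick Python sketch that evaluates a continued-fraction list exactly using the standard fractions module (the helper name is just illustrative):

```python
from fractions import Fraction

def from_continued_fraction(terms):
    # Evaluate [a; b, c, ...] = a + 1/(b + 1/(c + ...)) exactly.
    value = Fraction(terms[-1])
    for term in reversed(terms[:-1]):
        value = term + 1 / value
    return value

print(from_continued_fraction([3, 7, 16]))     # 355/113
print(from_continued_fraction([3, 7, 15, 1]))  # 355/113 -- same value, per the trick above
print(from_continued_fraction([3, 7]))         # 22/7
```
|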
August 31, 2010 by wayward |
Wow! Great write-up, bretm. I was actually curious myself how these fractions were calculated, but had always been too lazy to search for the answer myself. Thanks! Here's also a somewhat related, iterative algorithm to approximate the square root of a real number _N_.
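A minimal Python sketch of that iteration (any positive starting guess works; N/2 is a convenient one):

```python
def approx_sqrt(n, iterations=6):
    # Repeatedly average the guess with n / guess; each pass roughly
    # doubles the number of correct digits.
    guess = n / 2.0 if n > 1 else float(n)   # any positive starting guess works
    for _ in range(iterations):
        guess = (guess + n / guess) / 2.0
    return guess

print(approx_sqrt(2))   # 1.414213562...
```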
It may not be the fastest way, but it's simple and handy to know. Also see Wikipedia for more numerical methods of calculating square root and related functions. |
August 31, 2010 by Ralphxyz |
Excellent, it will take a bit to digest, but thank you. Ralph |
August 31, 2010 by bretm |
That square root calculation is a special case of Newton's method for finding roots of equations, and it's a popular technique in many math libraries because it doubles the number of digits of precision with every iteration. For example, if you're calculating the square root of 2 and your current guess has the first 5 digits right, e.g. 1.4142, performing a single iteration gives you 10 good digits and two iterations gives you 20 good digits. In this case you get 1.414213562(43) after the next iteration and the real answer is 1.414213562(37).
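A quick way to watch that happen, reusing the same averaging step in Python:

```python
# Watch the correct digits roughly double with each pass
# (the true value is 1.4142135623730951).
guess = 1.4142                    # about 5 correct digits to start
for i in range(3):
    guess = (guess + 2.0 / guess) / 2.0
    print(i + 1, guess)           # pass 1: ~10 correct digits, pass 2: machine precision
```
|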
September 01, 2010 by Ralphxyz |
I hope this leads to more conversation and learning, but apologies beforehand, as some might find this post offensive.

75-80% of everything I know about electronics and computers comes from online forums. I have had a rather large library of computer books (I have often spent $400.00 a month at Amazon), but most of the book reading never really registered because I had very little practical application for it, or somewhere in the "sample" code something didn't work, and that ended that quest because I could not get help from the author or discussion groups. So I have read a lot of books on programming (from GeeWiz Basic, VB, Logo, C, C++, C#, JAVA, Prolog to APL and probably a few more; now I am getting ready for Assembler). All of them have emphasized the features and shortcomings of their number-handling capabilities, especially their precision.

Now, maybe it is the Luddite in me, but I have always asked myself, after a discourse on number accuracy to the most infinitesimal point, WHY? Why is this precision needed, or why is precision beyond a hundredth ($0.00, I do deal in money) needed? Now please bear with me on this and remember I did not have a teacher with whom I could search out and explore the wonders of mathematics, or be inspired by to accomplish things in life, or really anyone to just talk with.

I have dealt a lot with warehousing and delivery logistics, where we warehoused items, rarely fractional parts. We delivered packages; if we delivered a partial package we were in trouble. Unless I think about it, I really do not know of dealing with anything in the real world that required greater than a thousandth degree of accuracy. Well, I did work for a micro-precision bearing company building Redeye Missile guidance gyros, and they probably had a very critical accuracy requirement, but the chief machinist used to crank out the body of these gyros on a little bench lathe, so I wonder just how accurate they were. Now that I am learning electronics I can see where, in circuit design using a 33pF capacitor, I might need some precision in my circuit analysis, or then again I could just try it and see if it works.

To me, in my ignorant ways, I "sometimes" equate all this concern with precision of numbers with the lesson most of us have learned from programming: "just because you can do something does not mean that you should". I mean, if someone had not taught you that precision of numbers was important, would it somehow have manifested itself to you as being important, or are you just a reflection of what you have been taught? Were you taught this, so you promulgate it for the next person to carry on, or do you actually use this precision? And if so, is the precision really needed? I suppose if you are a rocket scientist launching a mission to Mars you might need precision, but could some of that process be satisfied with "close enough"? I know getting there is critical, but how many calculations are involved in going up and making sure you were in motion?

Again I apologize for the baseness of this post, but I think "why" is a legitimate question, especially from someone such as myself who has no formal learning. Please don't rant, well, a few snide comments might be appropriate. I really do not know why this precision of numbers matters, or what it means to my life in a practical sense.
bretm, once again thank you for the excellent, thought-provoking tutorial. I really love to see this kind of information in the forum, and I hope there will be lots of follow-up conversations and hopefully more such articles. Ralph |
September 01, 2010 by bretm |
"Why" is a perfectly valid question. The answer in this case, and it's not meant to be sarcastic, is "why not?" This issue arose, for me, when looking for space-saving solutions to get a program a few percent smaller to fit onto the microcontroller. The program used fixed-point math in some cases. It one point there was a multiplication followed by a division by 10000, which represented multiplication by a fixed point constant. There were a number of these constants in the program. I already knew an easy technique to improve such approximations, so why not use it? Why not save time and space by multiplying by a byte value and then dividing by a byte value instead of using two-byte values, when you get the same or better accuracy, for free? That was the main point that was admittedly buried in a rather long post. The question remains, if you don't need the extra space or time, why bother with such optimization. I would say if the code is already written, and it works, and doesn't need to be optimized, then don't optimize it. There's not a good reason. But if you have this technique in your set of tools and are writing new code, I would again say, why not? |
September 01, 2010 by wayward |
Well, we are dealing with miniature computers here, so our programs will likewise have to be minimal. Which generally implies that our knowledge of miniaturization needs to maximize. :) |
September 01, 2010 by Ralphxyz |
Excellent, learning more with every reply. Thank you. For me a discussion in a forum is a much better way of learning than reading it in a book. The "why not" makes sense, and wayward's point about miniaturization makes a lot of sense in application. Ralph |
September 21, 2010 by jbrad |
Here is a Python script I wrote to implement the approximation bretm described above.... why? ........ why not! |
September 21, 2010 by bretm |
That's great. I've just started learning Python so that was a good program to experiment with. I was investigating using fractions instead of floating point and ran across the limit_denominator method in the fractions module (with parentheses for Python 3.0's print).
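Something along these lines (a sketch; the loop and the denominator caps are just for illustration):

```python
from fractions import Fraction
import math

# Best fraction for pi with the denominator capped at increasing sizes.
for cap in (10, 100, 1000, 10000):
    f = Fraction.from_float(math.pi).limit_denominator(cap)
    print(cap, f, float(f) - math.pi)
# 10    -> 22/7
# 100   -> 311/99   (not one of the convergents)
# 1000  -> 355/113
# 10000 -> 355/113  (still the winner; nothing better until five-digit denominators)
```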
The ratios aren't always continued-fraction convergents, so it's a different algorithm. It can end up with results that are in between two steps of the continued-fraction algorithm. Technically that can give a slightly worse error for the number of bits spent on the fraction, but the difference isn't a practical issue. So in Python 2.6 or later I'd use that. |