Tuesday, October 13, 2020

PNSQC 2020 Live Blog: Testing Floating-Point Applications with Alan Jorgensen




OK, now I'm getting mildly anxious (LOL!). Hearing about floating-point errors being disastrous always makes me nervous, because I frequently look at calculations and think to myself, "how precise do we actually need to be?" Also, I wanted to give credit: this talk is being presented by Connie Masters.




I've always operated on the assumption that anything beyond the ten-thousandths place is not that critical (heck, as a little kid I learned Pi as 3.1416 because anything more precise was just not relevant; thanks, Grandpa ;) ). However, I know that in certain applications (chemistry and astronomy, for example) considerably more significant digits of accuracy are required. I'm having to rethink this now, as I feel that with digital systems and huge sample sizes, rounding errors that are insignificant on their own can stack up and become a real issue. Also, at what point do we go from insignificant to catastrophic?
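To see what I mean about stacking, here's a quick Python experiment (my own illustration, not from the talk): 0.1 has no exact binary representation, so every addition carries a tiny rounding error, and over millions of additions the naive running total visibly drifts away from an error-compensated sum.

import math

n = 10_000_000
naive = 0.0
for _ in range(n):
    naive += 0.1  # each += rounds, and the tiny errors accumulate

# math.fsum tracks the lost low-order bits, so it's a good reference point
compensated = math.fsum(0.1 for _ in range(n))

print(f"naive sum:       {naive:.10f}")
print(f"compensated sum: {compensated:.10f}")
print(f"drift:           {abs(naive - compensated):.10f}")

Each individual error here is around the 16th decimal place, i.e., completely "insignificant" on its own, which is exactly why the visible drift after ten million additions makes the question above feel real.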

This is the first time I've heard the phrase "logarithms are the way to add apples and oranges," and I'm not 100% sure I get the implication, but it's definitely a memorable phrase. The one thing I am sure of is that what I've understood as discrete mathematics all these years is lacking... and I don't quite know how I feel about that.

The net result of all of this is that I need to get comfortable with bounded floating-point and get some practice with it.
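To make that practice concrete, here's a minimal sketch of the idea as I understand it: carry a worst-case error bound alongside each value, so a result can tell you how many of its digits are still trustworthy. To be clear, this is my own toy Python illustration of the concept (the Bounded class and its bound propagation are my assumptions), not Alan Jorgensen's actual Bounded Floating Point format.

import sys
from dataclasses import dataclass

EPS = sys.float_info.epsilon  # unit roundoff for IEEE 754 double, ~2.2e-16

@dataclass
class Bounded:
    value: float
    bound: float = 0.0  # accumulated worst-case absolute error

    def __add__(self, other: "Bounded") -> "Bounded":
        v = self.value + other.value
        # propagate both operands' bounds, plus the rounding error of
        # this addition itself (at most half an ulp of the result)
        b = self.bound + other.bound + abs(v) * EPS / 2
        return Bounded(v, b)

    def __repr__(self) -> str:
        return f"{self.value!r} ± {self.bound:.3e}"

# Sum 0.1 a million times and watch the guaranteed error bound grow.
tenth = Bounded(0.1, abs(0.1) * EPS / 2)  # 0.1 is already inexact as stored
total = Bounded(0.0)
for _ in range(1_000_000):
    total = total + tenth
print(total)

What I like about this framing is that the "insignificant vs. catastrophic" question stops being a gut call: the bound is right there next to the value, and you can test against it.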




