Anyone who has ever had a wristwatch of similar tech should know how hard it is to get anything like precision out of those things. It's a millimeter-sized button with a millimeter of travel, and it can easily take half a second of jabbing at it to get it to trigger. It's for measuring your mile times in minutes, not fall times in fractions of a second.
Naturally, our data was total, utter crap. Any sensible analysis, treating the errors linearly, would have produced error bars wide enough to include zero and negative values. I dutifully crunched the numbers, determined that the acceleration due to gravity was something like 6.8 m/s^2, and turned it in.
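To put rough numbers on the mess (a quick sketch, assuming a drop height of about 2 m and a half-second timing error, neither of which is from the original post):

    # Sketch only: drop height and timing error are assumed, not from the post.
    import math

    g_true = 9.8                               # m/s^2
    height = 2.0                               # assumed drop height, metres
    t_fall = math.sqrt(2 * height / g_true)    # actual fall time, ~0.64 s

    dt = 0.5                                   # plausible stopwatch-jab error, seconds

    # g = 2h / t^2, so linear error propagation gives dg/g = 2 * (dt/t)
    g_est = 2 * height / t_fall**2
    dg = g_est * 2 * dt / t_fall
    print(f"g = {g_est:.1f} +/- {dg:.1f} m/s^2")   # about 9.8 +/- 15 m/s^2

With the timing error comparable to the fall time itself, zero and negative values sit comfortably inside the error bars, and 6.8 m/s^2 is about as good as the gear allows.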
Naturally, I got a failing grade, because that's not particularly close, and no matter how many times you are solemnly assured otherwise, you are never graded on whether you did your best and honestly reported what you observed. From grade school on, you are graded on whether or not the grading authority likes the results you got. You might hope that there comes some point in your career where that stops being the case, but as near as I can tell, it literally never does. Right on up to professorships, this is how science really works.
The lesson is taught early and often. It sort of baffles me when other people are surprised at how often this happens in science, because it more-or-less always happens. Science proceeds despite this, not because of it.
(But jerf, my teacher... Yes, you had a wonderful teacher who not only gave you an A for the equivalent but called you out in class for your honesty and, I dunno, flunked everyone who claimed they got the supposed "correct" answer to three significant digits because that was impossible. There are a few shining lights in the field and I would never dream of denying that. Now tell me how that idealism worked out for you over the next several years.)
The closing sentence is also prescient; the author pivoted to CS, ultimately completing his doctorate at the University of Wisconsin at Madison.
https://pages.cs.wisc.edu/~kovar/
For all its flaws, Fahrenheit was based on some good ideas and firmly grounded in what you could easily measure in the 1720s. A brine solution and body heat are two things you can measure without risking burning or freezing the observer. Even the gradations were intentional: in the original scale, the reference temperatures mapped to 32 and 96, and since those are 64 units apart, you could mark the rest of the thermometer with a bit of string and some halving geometry. Marking a Celsius scale from 0 to 100 accurately? Hope you have a good pair of calipers to divide a range into five evenly-spaced divisions...
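A quick sketch of why the 64-unit gap matters (the loop is just illustrative; 32 and 96 are the reference marks mentioned above): since 96 - 32 = 64 = 2^6, six rounds of folding the string in half between adjacent marks hit every whole degree.

    # Illustrative only: repeated bisection between the two reference marks.
    marks = {32.0, 96.0}
    for _ in range(6):
        pts = sorted(marks)
        midpoints = {(lo + hi) / 2 for lo, hi in zip(pts, pts[1:])}
        marks |= midpoints

    print(sorted(marks) == [float(x) for x in range(32, 97)])  # True

No step of that procedure requires anything but a straightedge and something to fold; there is no way to get a division by ten out of it.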
Nowadays, we have machines capable of doing proper calibration of such mundane temperature ranges to far higher accuracy than the needle or alcohol-mix can even show, but back then, when scientists had to craft their own thermometers? Ease of manufacture mattered a lot.
I could never reproduce it well in the lab, because it's really not true. Take a heavy block the shape of a book. Orient it so that the spine is on the floor. There's a lot more friction moving it in one direction than in the transverse direction, yet the normal force is the same. Any kid knows this, and I feel dumb that it never occurred to me until someone pointed it out.
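For context, the claim being contested here is the usual textbook (Amontons-Coulomb) friction law, which has no term for contact area, shape, or sliding direction:

    F_friction = mu * N    # coefficient of friction times normal force, nothing else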
404 error now, perhaps some admins took it down due to traffic?
The user's directory is still linked from the listing [0] though.
[0] https://pages.cs.wisc.edu/
As an odd coincidence, I also did the same experiment on a shoestring budget with substandard equipment. I too used a fancy computer algorithm to get a best fit. Except that I managed to get four significant digits in the result, an improvement over the (also outdated) textbook.
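For anyone curious what that best-fit step looks like in practice, here is a minimal sketch, with made-up data since the original measurements aren't given, fitting d = (1/2) g t^2 by least squares:

    # Hypothetical (time, height) data; only the fitting step is the point here.
    import numpy as np

    t = np.array([0.45, 0.64, 0.78, 0.90])   # measured fall times, s (made up)
    d = np.array([1.00, 2.00, 3.00, 4.00])   # drop heights, m (made up)

    # d = 0.5 * g * t^2, so fit d against t^2 and read g off as twice the slope.
    slope, intercept = np.polyfit(t**2, d, 1)
    print(f"best-fit g = {2 * slope:.2f} m/s^2")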
The author of the angry rant had a life-defining experience of overwhelming frustration.
The same scenario resulted in a positive life-defining experience for me.
It’s funny how unpredictably things pan out even in identical circumstances…