A Human’s Guide to Machine Intelligence – Kartik Hosanagar
Thoughts: I wrote up handwritten notes on this one back in 2019; now that I’m getting around to digitizing them in 2022, I remember very little about the book. Didn’t make a big impact on me.
(The notes below are not a summary of the book, but rather raw notes - whatever I thought, at the time, might be worth remembering.)
Hosanagar, Kartik. 2019. A Human’s Guide to Machine Intelligence. Viking.
- 45: unanticipated consequences: proposed by Adam Smith; popularized by Robert Merton. 3 kinds:
- unforeseen benefits
- perverse results - cause the opposite of the intended effect on the metric you’re trying to change
- unexpected drawbacks - cause a negative effect on a metric you’re not trying to change
- 74: Book mentioned: Chris Anderson’s The Long Tail. Suggests that automated recommendations can help people find relevant niche products/information
- 79: content-based (as opposed to context-based) recommendation systems are better at surfacing relevant gems that may otherwise go undiscovered.
- 89: book mentioned: Marvin Minsky & Seymour Papert (1969), Perceptrons - outlined limitations of neural networks
- 102: to look up: Samuel Arbesman’s Overcomplicated.
- 135: Music recommendation algorithms lead to increased overlap in music consumption. Due to:
- Listeners listen more when exposed to the algorithm’s suggestions
- Algorithm helped people discover new interests
- conclusion: at least sometimes, algorithms can lead to less fragmentation and diminish echo-chamber effects
- 140: algorithm behaviour can differ in different contexts. In one environment an algorithm might mitigate echo-chamber effects, while, in another, the same algorithm could exacerbate the problem.
- 156: Humans much more readily lose faith in an algorithm that makes mistakes than in a human that makes the same mistakes
- 176: humans tend to trust algorithms when they have some control over them, even if this control is minimal
- 192: in human discourse, there is such a thing as the “right amount” of transparency - more open than secrecy, less than TMI. Hosanagar suggests a similar principle should be followed in cultivating trust in algorithms.
- 194: three categories of explanations in justifying a decision: how, why, trade-off. Each is most appropriate in certain situations:
- How: to allay weakened competence belief
- Why: to allay weakened benevolence belief (self-interest)
- Trade-off: to allay weakened integrity belief (fairness, honesty)
Posted: Nov 11, 2022. Last updated: Nov 11, 2022.