Chapter 1 presents some of the precursor models on which current AI systems are built, such as the perceptron (an early neural network), the AlexNet network, image-processing models, machine learning systems, and word-embedding models, together with the ethical concerns they raise.
In 1958, Frank Rosenblatt introduced the “perceptron” at a demonstration organized by the Office of Naval Research in Washington, D.C. The device could learn from its mistakes by adjusting itself after each error. A basic neural network, the perceptron determined the position of colored squares on flashcards using only the binary data received from its camera. Rosenblatt’s presentation showcased the perceptron’s ability to learn and adapt through experience, which he described as a “self-induced change in the wiring diagram” (18). That phrase also pointed to how poorly neural networks were understood at the time. Earlier theorists such as McCulloch and Pitts had envisioned “suitably connected” networks but never realized them in practical applications; Rosenblatt achieved this by supplying “a model architecture” (18) containing adjustable parameters that can be modified by an “optimization algorithm or training algorithm” (18).
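To make the mechanism concrete, the sketch below shows a minimal perceptron in the spirit of Rosenblatt’s machine: a set of adjustable weights (the “model architecture”) updated by a simple training algorithm that changes the weights only after an error. The task and data here are hypothetical stand-ins, not a reconstruction of the original hardware: a tiny binary “image” whose bright pixel must be classified as falling on the left or right half of the frame.

```python
# A minimal sketch of a perceptron, assuming a toy left/right classification
# task (the flashcard setup described in the chapter, drastically simplified).

def predict(weights, bias, pixels):
    """Fire (return 1) if the weighted sum of binary inputs crosses the threshold."""
    activation = sum(w * x for w, x in zip(weights, pixels)) + bias
    return 1 if activation > 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    """Perceptron learning rule: adjust the weights only when a prediction is wrong."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for pixels, target in zip(samples, labels):
            error = target - predict(weights, bias, pixels)
            if error != 0:  # a "self-induced change in the wiring diagram"
                weights = [w + lr * error * x for w, x in zip(weights, pixels)]
                bias += lr * error
    return weights, bias

# Hypothetical data: 4-pixel images; label 1 if the bright pixel is on the right half.
samples = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
labels = [0, 0, 1, 1]

weights, bias = train(samples, labels)
print([predict(weights, bias, s) for s in samples])  # expected: [0, 0, 1, 1]
```

The key design point, and what made Rosenblatt’s demonstration notable, is that the correct weights are never programmed in: the machine arrives at them by repeated small corrections after each mistake.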
Rosenblatt’s demonstration positioned the perceptron as a foundational model for future machine learning systems, emphasizing its ability to form connections and draw inferences from simple binary inputs, in a manner loosely analogous to the human brain.