에릭 랄슨 2022 "인공지능의 신화 : 컴퓨터가 우리가 하는 방식으로 생각할 수 없는 이유"

(에릭 랄슨 2022)

Futurists are certain that humanlike AI is on the horizon, but in fact engineers have no idea how to program human reasoning. AI reasons from sta...

The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do

목차 Table of Contents

1부 : 단순화된 세계 Part I: The Simplified World

  1. The Intelligence Error

  2. Turing at Bletchley

  3. The Superintelligence Error

  4. The Singularity, Then and Now

  5. Natural Language Understanding

  6. AI as Technological Kitsch

  7. Simplifications and Mysteries

  1. 지능 오류

  2. 블레츨리의 튜링

  3. 초지능 오류

  4. 특이점, 과거와 현재

  5. 자연어 이해

  6. 기술적 키치로서의 AI

  7. 단순화와 신비

2부 : 추론의 문제 Part II: The Problem of Inference

  • Don’t Calculate, Analyze
  • The Puzzle of Peirce (and Peirce’s Puzzle)
  • Problems with Deduction and Induction
  • Machine Learning and Big Data
  • Abductive Inference
  • Inference and Language I
  • Inference and Language II
  • 계산하지 말고 분석하라
  • 퍼스라는 퍼즐(그리고 퍼스의 퍼즐)
  • 연역과 귀납의 문제
  • 머신러닝과 빅데이터
  • 귀추적 추론
  • 추론과 언어 I
  • 추론과 언어 II

3부 : 신화의 미래 Part III: The Future of the Myth

  • 신화와 영웅
  • AI 신화가 신경과학을 침범하다
  • 인간 지능의 신피질 이론
  • 과학의 종말?
  • Myths and Heroes
  • AI Mythology Invades Neuroscience
  • Neocortical Theories of Human Intelligence
  • The End of Science?

랄슨 책 용어 Larson Book Terminology

[2024-11-03 Sun 12:26]

용어 듀얼 번역 필요 (index terms below need dual Korean/English rendering)

INDEX abduction, 160–168; Bayesian, 293–294n13; creative, 187–189; in fast thinking, 185, 283n1; as inference, 4; Peirce on, 25–26, 94, 171–172 abductive inference, 99–102, 162–163, 189, 190 abductive logic programming (ALP), 168, 176, 283n1 accuracy, saturation problem in, 155–156 AdaptWatson (Jeopardy! playing system), 223–225 affirming the consequent, 170 airplane crashes, 113–114 Alexander, Hugh, 40, 284n4 AlexNet (computer program), 165 alignment problem, 279 AlphaGo (computer program), 125, 161–162 altruism, 82 Amazon (firm), 144 anaphora, 210 Anderson, Chris, 145, 243 anomaly detection, 150–151 anti-intellectual policies, 270, 271 Arendt, Hannah, 64–66 Aristotle, 106, 107 artificial general intelligence (AGI), 38, 273; inadequate attempts at, 218; induction not strategy for, 173; Kurzweil on, 79; lack of investment in, 272; mythology of, 275; predictions of, 71–72 artificial intelligence (AI): big data in, 145; of chess-playing computers, 219; Dartmouth Conference on, 50–51; Data Brain projects and, 251–252; deduction-based, 110, 115; history of term, 285n11; human intelligence versus, 1–2; inductive, 273–274; in Jeopardy! playing computer (Watson), 221–226; Kurzweil’s prediction for, 47–48; limits to, 278–279; logic used in, 106–107; narrowness trap in, 226–231; natural language understanding of, 51–55; Peirce on, 233; predicting, 69–73; Turing on, 23; Winograd schemas test of, 195–203 Babbage, Charles, 232–233 Bacon, Francis, 288n4 Bangs, George, 159 Bar-Hillel, Yehoshua, 53–54, 57, 201–202 Barker, Adam, 143 Bayesian inference, 293–294n13 behaviorism (operant conditioning), 69 Bengio, Yoshua, 70, 72, 75, 127 Benkler, Yochai, 241–243 Berlin, Isaiah, 69 Berners-Lee, Tim, 179 biases (in machine learning systems), 28–29 big data, 55–56, 142–146, 243; in AI, 269; in neuroscience, 249, 251–257, 261–262; to solve Winograd schema questions, 200; in theories of brain, 266–267 Big Science, 267 Bombes (proto-computers), 21–22, 24 Bostrom, Nick, 2, 34–35, 57, 81 Bowman, David, 260–261 brain-mapping problem, 249–250 Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative, 248, 249, 253, 256, 268 Brent, Joseph, 95, 289n5 Brin, Sergey, 56 brittleness problem, 126–129, 165 Bush, George H. 
W., 248 Byron, Lord, 238 Capek, Karel, 82–83 causation: correlation and, 259; Hume on, 120; ladder of, 130–131, 174; relevance problems in, 112 chess: Deep Blue for, 219; played by computers, 284n1; Turing’s interest in, 19–20 Chollet, François, 27 Chomsky, Noam, 52, 95 classification, in supervised learning, 134 cognition, Legos theory of, 266 color, 79, 289n16 common sense, 2, 131–132, 177; scripts approach to, 181–182; Winograd schemas test of, 196–203 computational knowledge, 178–182 computers: chess played by, 19–20, 284n1; earliest, 232–233; in history of technology, 44; machine learning by, 133; translation by, 52–55; as Turing machines, 16, 17; Turing’s paper on, 10–11 Comte, August, 63–66 Condorcet (Marie Jean Antoine Nicolas Caritat, the Marquis de Condorcet), 288n4 conjectural inference, 163 consciousness, 77–80, 277 conversations, Grice’s Maxims for, 215–216 Copernicus, Nicolaus, 104 counterfactuals, 174 creative abduction, 187–189 Cukier, Kenneth, 143, 144, 257 Czechoslovakia, 60–61 Dartmouth Conference (Dartmouth Summer Research Project on Artificial Intelligence; 1956), 50–51 data: big data, 142–146; observations turned into, 291n12 Data Brain projects, 251–254, 261, 266, 268, 269 data science, 144 Davis, Ernest, 131, 183; on brittleness problem, 126; on correlation and causation, 259; on DeepMind, 127, 161–162; on Google Duplex, 227; on limitations of AI, 75–76; on machine reading comprehension, 195; on Talk to Books, 228 deduction, 106–110, 171–172; extensions to, 167, 175; knowledge missing from, 110–112; relevance in, 112–115 deductive inference, 189 Deep Blue (chess computer), 219 deep learning, 125, 127, 134, 135; as dead end, 275; fooling systems for, 165–166; not used by Watson, 231 DeepMind (computer program), 127, 141, 161–162 DeepQA (Jeopardy! computer), 222–224 deep reinforcement learning, 125, 127 Dostoevsky, Fyodor, 64 Dreyfus, Hubert, 48, 74 earthquake prediction, 260–261 Eco, Umberto, 186 Edison, Thomas, 45 Einstein, Albert, 239, 276 ELIZA (computer program), 58–59, 192–193, 229 email, filtering spam in, 134–135 empirical constraint, 146–149, 173 Enigma (code making machine), 21, 23–24 entity recognition, 137 Etzioni, Oren, 129, 143–144 Eugene Goostman (computer program), 191–195, 214–216 evolutionary technology, 41–42 Ex Machina (film, Garland), 61, 78–80, 82, 84, 277 Facebook, 147, 229, 243 facts, data turned into, 291n12 Farecast (firm), 143–144 feature extraction, 146–147 Ferrucci, Dave, 222, 226 filter bubbles, 151 financial markets, 124 Fisch, Max H., 96–97 Fodor, Jerry, 53 formal systems, 284n6 Frankenstein (fictional character), 238 Frankenstein: Or, a Modern Prometheus (novel, Shelly), 238, 280 frequency assumptions, 150–154, 173 Fully Automated High-Quality Machine Translation, 48 functions, 139 Galileo, 160 gambler’s fallacy, 122 games, 125–126 Gardner, Dan, 69–70 Garland, Alex, 79, 80, 289n16 Gates, Bill, 75 general intelligence, 2, 31, 36; abduction in, 4; in machines, 38; nonexistance of, 27; possible theory of, 271 General Problem Solver (AI program), 51 Germany: Enigma machine of, 23–24; during World War II, 20–21 Go (game), 125, 131, 161–162 Gödel, Kurt, 11, 22, 239; incompleteness theorems of, 12–15; Turing on, 16–18 Golden, Rebecca, 250 Good, I. J. 
“Jack,” 3, 19; on computers, 46; on intelligence, 33–35, 37, 43, 62; Von Neumann on, 36 Google (firm), 220, 244 Google Brain (computer program), 296n4 Google Duplex, 227 Google Photos, 278–279 Google Talk to Books, 228 Google Translate (computer program), 56, 201, 202 Goostman, Eugene (computer program), 191–195, 214–216 gravimetrics, 157 gravity, 187 Great Britain, code breaking during World War II by, 20–24 Grice, Paul, 215 Grice’s Maxims, 215–216 guesses, 160, 183–184 Haugeland, John, 179, 294n17 Hawking, Stephen, 75 Hawkins, Jeff, 263, 264 Heisenberg, Werner, 72 hierarchical hidden Markov models, 265–266 Higgs, Peter, 254–255 Higgs boson, 254–255, 257–258 Hilbert, David, 14–16 Hill, Sean, 245, 246, 248 Hinton, Geoff, 75 hive mind: online collaboration as, 241–242; origins in Star Trek of, 240; science collaborations as, 245 Homo faber (man the builder), 65, 66 Horgan, John, 275–278 Hottois, Gilbert, 287n1 Human Brain Project, 245, 247–254, 256, 267–268 Human Genome Project, 252 human intelligence: artificial intelligence versus, 1–2; behaviorism on, 69; Data Brain projects and, 251; Good on, 33–35; infinite amount of knowledge in, 54; neocortical theories of, 263–268; as problem solving, 23; singularity as merging with machine intelligence, 47–48; social intelligence, 26–28; Stuart Russell’s definition of, 77, 83; thinking in, 184–186; Turing on, 23, 27–31, 30–32 human language. See natural language human nature, 242, 243 humans: Arendt on, 64–65; Skinner on, 68 Hume, David, 119–122, 125 humorous news stories, 152–154 hunters, 164 hypotheses, 186 IBM: Deep Blue by, 219; Jeopardy! computer (Watson) by, 220–226, 230–231 ImageNet competitions, 135, 145, 155, 165, 243 image recognition, 278–279 imitation game, 9, 51 The Imitation Game (film), 21 incompleteness theorems, 12–15 induction, 115–121, 171–172; abduction and, 161; in artificial intelligence, 273–274; in life situations, 125–126; limits to, 278–279; machine learning as, 133; not strategy for artificial general intelligence, 173; problems of, 122–124; regularity in, 126–129 inductive inference, 189 inference, 4, 104, 280–281, 283n1; abductive inference, 99–102, 162–163; in artificial intelligence, 103; combining types of, 218–219, 231; guesses as, 160; in history of science, 103–105; in knowledge bases, 181, 182; monotonic, 167; non-monotonic, 167–168; as trust, 129–130; types of, 171 inference engines, 182–184 information technology, 249, 252 ingenuity: Gödel on, 12; Turing on, 11, 17, 18 innovation: decline in, 269; funding in control over, 270 insight, 103 instincts, 184 intelligence. See human intelligence intelligence explosions, 37–41 internet, Web 2.0, 240–242 intractability, 175 intuition: in code-breaking machines, 22; Gödel on, 12; human intelligence versus, 27; Turing on, 11, 16–18, 31–32; weight of evidence in, 24–25 James, Henry, 289–290n5 James, William, 96–98 Japan, 54–55 Jennings, Ken, 220 Jeopardy! 
(television quiz show), 220–226, 230–231 Jevons, William, 232 Jevons Logical Piano, 232 Jordan, Michael, 258 Kahneman, Daniel, 38, 184–185 Kandel, Eric R., 253, 255–256, 268 Kasparov, Garry, 219 Kavli Foundation, 249, 255 Keilis-Borok, Vladimir, 260 Kelly, Kevin, 42, 241–242, 285–286n7 Kenall, Amye, 249 Kepler, Johannes, 104 kitch, 61–63 knowledge: computational, 178–182; deductive reasoning and, 110–112; infinite amount of, 54; as problem for induction, 124; used in inference, 102 knowledge-based systems, 107 knowledge bases, 178–181 knowledge representation and reasoning (KR&R), 175–176 Koch, Christof, 253, 256 Kundera, Milan, 60–61 Kurzweil, Ray, 35, 38, 84, 274; on consciousness, 78–79; on future of AI, 70, 74; hierarchical pattern recognition theory of, 264–266; on human intelligence, 251; Law of Accelerating Returns of, 42, 47–48, 67; on singularity, 46; on superintelligent machines, 2; on Turing test, 193–194 ladder of causation, 130, 174 Lakatos, Imre, 48 Laney, Doug, 292n5 language. See natural language Lanier, Jaron, 84, 244, 277; on encouraging human intelligence, 239; on erosion of personhood, 270, 272–273 Large Hadron Collider (LHC), 254–255 Law of Accelerating Returns (LOAR), 42, 47–48 learning: definition of, 133; by humans, 141 LeCun, Yann, 75 Legos theory of cognition, 266 Lenat, Doug, 74 Levesque, Hector, 76, 216; on attempts at artificial general intelligence, 175, 186; on Goostman, 192; pronoun disambiguation problem of, 203; on Winograd schemas, 195–196, 198–201 Loebner Prize, 59 Logic Theorist (AI program), 51, 110 Lord of the Rings (novels; Tolkien), 229–230 Lovelace, Ada, 233 machine learning: definition of, 133; empirical constraint in, 146–149; frequency assumption in, 150–154; model saturation in, 155–156; as narrow AI, 141–142; as simulation, 138–140; supervised learning in, 137 machine learning systems, 28–30 MacIntyre, Alasdair, 70–71 Marcus, Gary, 131, 183; on brittleness problem, 126; on correlation and causation, 259; on DeepMind, 127, 161–162; on Google Duplex, 227; on Goostman, 192; on Kurzweil’s pattern recognition theory, 265; on limitations of AI, 75–76; on machine reading comprehension, 195; on superintelligent computers, 81; on Talk to Books, 228 Markram, Henry, 252–254, 273; on AI, 251; on big data versus theory, 256–258, 267, 268; on hive mind, 245–246, 276; Human Brain Project under, 247–250; Legos theory of cognition by, 266; on theory in neuroscience, 261, 262 Marquand, Alan, 232 mathematics: functions in, 139; Gödel’s incompleteness theorems in, 12–14; Hilbert’s challenge in, 14–16; Turing on intuition and ingenuity in, 11 Mathews, Paul M., 256, 267 Mayer-Schönberger, Viktor, 143, 144, 257 McCarthy, John, 50, 107, 285n11 Microsoft Tay (chatbot), 229 Mill, John Stuart, 242, 243 Miller, George, 50 minimax technique, 284n1 Minsky, Marvin, 50, 52, 222 Mitchell, Melanie, 165 Mitchell, Tom, 133 model saturation, 155–156 modus ponens, 108–109, 168–169 monologues, Turing test variation using, 194–195, 212–214 monotonic inference, 167 Mountcastle, Vernon, 264 Mumford, Lewis, 95, 98 “The Murders in the Rue Morgue” (short story, Poe), 89–94 Musk, Elon, 1, 75, 97 narrowness, 226–231 Nash, John, 50 National Resource Council (NRC), 53, 54 natural language: AI understanding to, 228–229; computers’ understanding of, 48, 51–55; context of, 204; continued problems with translation of, 56–57; in speech-driven virtual assistance applications, 227; Turing test of, 50, 194; understanding and meaning of, 205–214; Winograd schemas test of, 195–203 neocortical 
theories: Hawkins’s, 263; Kurzweil’s, 264–266 neural networks, 75 neuroscience, 246; collaboration in, 245–247; Data Brain projects in, 251–254; Human Brain Project in, 247–251; neocortical theories in, 263–268; theory versus big data in, 255–256, 261–262 Newell, Allan, 51, 110 news stories, 152–154 Newton, Isaac, 187, 276 Nietzsche, Friedrich Wilhelm, 63 no free lunch theorem, 29 noisy channel approach, 56 non-monotonic inference, 167–168 normality assumption, 150–151 Norvig, Peter, 77, 155, 156 nuclear weapons, 45 Numenta (firm), 263 observation: generalizing from, 117–118; in induction, 115; limitations of, 121; turning into data, 291n12 operant conditioning (behaviorism), 69 orthography, 205 overfitting (statistical), 258–261 Page, Larry, 56 Pearl, Judea, 130–131, 174, 291n13 Peirce, Charles Sanders, 95–99; on abduction, 25–26, 160–168; on abductive inference, 99–102, 190; on guessing, 94, 183–184; on “Logical Machines,” 232–233, 273; theft of watch from, 157–160, 289–290n5; on types of inference, 171–172, 181; on weight of evidence, 24 Peirce, Juliette, 98 Perin, Rodrigo, 266 PIQUANT (AI system), 221–224 Poe, Edgar Allan, 89–94, 99, 102 Polanyi, Michael, 73–74 Popper, Karl, 70–71, 122 positivism, 63 pragmatics (context for natural language), 204, 206, 214–215, 296n1 predictions, 69–73; big data used for, 143–144; induction in, 116, 124; limits to, 130 predictive neuroscience, 254 probabilistic inference, 102 programming languages for early computers, 284n2 Prometheus (mythical), 237–238 pronoun disambiguation problem, 203 propositional logic, 169–170 random sampling, 118 reading comprehension, 195 real-time inference, 101 reasoning, 176 religion, 63 resource description framework (RDF), 179 R.U.R. (play, Capek), 83 Russell, Bertrand, 110, 121–124, 173 Russell, Stuart, 42–43, 84; on human ingenuity, 69, 274; on intelligence, 76–78; on language and common sense, 131; on limits to supercomputers, 39; on logic in AI, 107; on problems in AI, 279; on superintelligent computers, 80–83; on Turing test, 193; on two-player games, 125 Rutherford, Ernest, 43 Salmon, Wesley, 112 sampling, 117–118 sarcasm, 151–152, 296n1 saturation problem, 155–156 Schank, Roger, 181–182 science, 63–64, 66–67; Bertrand Russell on, 122; big data versus theory in, 145, 243, 255–259; Big Science, 267; collaboration in, 245–247; completeness of, 275–277; Data Brain projects in, 251–254; guessing as inference in, 160; inference in history of, 103–105; megabuck influence in, 269–271 scripts, 181–182 selection problem, 182–184, 186–190 self-driving cars, 127, 278; saturation problem in, 155–156 self-reference, in mathematics, 13 semantic role labeling, 138–139 semantics, 206 Semantic Web, 179 semi-supervised learning, 133–134 sequential classification, 136–137 sequential learning, 136–137 sexual desire, 79 Shannon, Claude, 19, 50, 56 Shaw, Cliff, 110 Shelley, Mary, 50, 238, 280 Shelley, Percy, 238, 280 Sherlock Holmes (fictional character), 90, 121, 161, 291n13 Shirky, Clay, 241, 242 Silver, Nate, 259–261 Simon, Herbert, 50–52, 110 simulations, machine learning as, 138–140 Singularity, 45–46, 50; Kurzweil on, 47–48; origins of concept, 286n3 Skinner, B. 
F., 68, 69 social change, 44–45 social intelligence, 26–28 social networks, 220 soundness, 109 Soviet Union, 60–61 spam, classifying, 134–135 speech-driven virtual assistance applications, 227 supercomputers, 249, 250, 276–277 superintelligence, 39; Bostrom on, 34, 81; Kelly on, 285–286n7; Kurzweil’s prediction for, 47; Russell on, 80–82 supervised learning, 133, 134; text classification as, 136–137 Surowiecki, James, 244 swarm intelligence, 245–247 Swift, Jonathan, 232, 272, 273 syllogisms, 106–107, 171 syntax, 206–207 Szilard, Leo, 43 Taleb, Nassim Nicholas, 129–130 Talk to Books (Google), 228 Tay (Microsoft chatbot), 229 technology: change in, 44–45; evolution of, 41–42; in kitch, 61–63; Law of Accelerated Returns on, 47–48, 67; of Web 2.0, 240–241 technoscience, 63–67, 287n1 Tesla, Nikola, 97 Tetlock, Philip, 69 text classification, 135–137 Text REtrieval Conferences competitions, 221 theory: big data versus, 145, 243; in neuroscience, 255–259, 261–262; overfitting over data, 259–261 Thiel, Peter, 269, 275–276 thinking, 184–186 time-series prediction, 137 Tolstoy, Leo, 70 translation of natural language, 48, 52–54, 201–202; continued problems with, 56–57; noisy channel approach to, 56 Turing, Alan, 3, 9–10, 62, 192, 239, 245–247, 270; challenge made by, 49, 51; chess interest of, 19–20; on common sense, 131; on computers thinking, 102–103; on decidability of mathematics, 15–16; on Gödel, 16–18; on human intelligence, 23; on humans as black boxes, 68; on initiative in machines, 233–234; on intelligence, 27–32; on machine learning, 156; on typewriters, 187–188; on understanding language, 207–208; World War II code breaking by, 21–22, 24, 26 Turing machines, 16, 17 Turing test, 18, 57–59, 105; Goostman’s passing of, 191–194; Kurzweil on, 79; monologue variation on, 194–195, 212–214; opposition to, 77; Turing’s invention of, 9–10, 49, 51; Winograd schemas as variation on, 195–203 typewriters, 187–188 Ulam, Stanislaw, 44–46 ultraintelligence, 33, 62 The Unbearable Lightness of Being (novel, Kundera), 60 Unstructured Information Management Architecture (UIMA), 224 unsupervised learning, 133, 137 Venn, John, 232 Vinge, Vernor, 46, 286n3 Voltaire, 277 Von Neumann, John, 36–37, 44, 239; technological singularity theorized by, 45–46, 65–66 Walmart Labs (firm), 144, 210 Ward, Jonathan Stuart, 143 Watson (AI system), 28, 221–226, 230–231 Webb, Charles Henry, 232 Weiss, Paul, 95 Weizenbaum, Joseph, 58 Welchman, Gordon, 24 Whitehead, Alfred North, 110 Wiener, Norbert, 269–275 Wikipedia, 225, 226, 241 Winograd, Terry, 58, 195, 201 Winograd schemas, 195–203, 210–212, 216–218, 296n4, 296n7 World War II, code breaking during, 20–26 World Wide Web, 55–56; Web 2.0, 240–242 Yong, Ed, 247 Zeus (mythical), 237–238

Related-Notes

References

에릭 랄슨. 2022. 인공지능의 신화 : 컴퓨터가 우리가 하는 방식으로 생각할 수 없는 이유. https://www.yes24.com/Product/Goods/107426024.