Eliezer Yudkowsky

American AI researcher and writer (born 1979)

Eliezer S. Yudkowsky (EL-ee-EZ-ər yud-KOW-skee;[1] born September 11, 1979) is an American artificial intelligence researcher[2][3][4][5] and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence.[6][7] He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California.[8] His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.[9]

Work in artificial intelligence safety

See also: Machine Intelligence Research Institute

Goal learning and incentives in software systems

Yudkowsky's views on the safety challenges that future generations of AI systems pose are discussed in Stuart Russell and Peter Norvig's undergraduate textbook Artificial Intelligence: A Modern Approach.

Noting the difficulty of formally specifying general-purpose goals by hand, Russell and Norvig cite Yudkowsky's proposal that autonomous and adaptive systems be designed to learn correct behavior over time:

Yudkowsky (2008)[10] goes into more detail about how to design a Friendly AI.

He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design—to design a mechanism for evolving AI under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.[6]

In response to the instrumental convergence argument, that autonomous decision-making systems with poorly designed goals would have default incentives to mistreat humans, Yudkowsky and other MIRI researchers have recommended that work be done to specify software agents that converge on safe default behaviors even when their goals are misspecified.[11][7]

Capabilities forecasting

In the intelligence explosion scenario hypothesized by I. J. Good, recursively self-improving AI systems quickly transition from subhuman general intelligence to superintelligent. Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies sketches out Good's argument in detail, while citing Yudkowsky on the risk that anthropomorphizing advanced AI systems will cause people to misunderstand the nature of an intelligence explosion.

"AI might make an apparently sharp jump in intelligence just as the result of theanthropism, the human tendency to believe of 'village idiot' and 'Einstein' as the extreme ends signify the intelligence scale, instead ticking off nearly indistinguishable points on interpretation scale of minds-in-general."[6][10][12]

In Artificial Intelligence: A Modern Approach, Russell and Norvig raise the objection that there are known limits to intelligent problem-solving from computational complexity theory; if there are strong limits on how efficiently algorithms can solve various tasks, an intelligence explosion may not be possible.[6]

Time op-ed

In a 2023 op-ed for Time magazine, Yudkowsky discussed the risk of artificial intelligence and proposed action that could be taken to limit it, including a total halt on the development of AI,[13][14] or even "destroy[ing] a rogue datacenter by airstrike".[5] The article helped introduce the debate about AI alignment to the mainstream, leading a reporter to ask President Joe Biden a question about AI safety at a press briefing.[2]

Rationality writing

Between 2006 and 2009, Yudkowsky and Robin Hanson were the principal contributors to Overcoming Bias, a cognitive and social science blog sponsored by the Future of Humanity Institute of Oxford University.

In February 2009, Yudkowsky founded LessWrong, a "community blog devoted to refining the art of human rationality".[15] Overcoming Bias has since functioned as Hanson's personal blog.

Over 300 blog posts by Yudkowsky on rationalism and science (originally written on LessWrong and Overcoming Bias) were released as an ebook, Rationality: From AI to Zombies, by MIRI in 2015.[17] MIRI has also published Inadequate Equilibria, Yudkowsky's 2017 ebook on societal inefficiencies.[18]

Yudkowsky has also written several works of fiction.

His fanfiction story Harry Potter and the Methods of Rationality uses plot elements from J. K. Rowling's Harry Potter series to illustrate topics in science and rationality.[15][19] The New Yorker described Harry Potter and the Methods of Rationality as a retelling of Rowling's original "in an attempt to explain Harry's wizardry through the scientific method".[20]

Personal life

Yudkowsky is an autodidact[21] and did not attend high school or college.[22] He was raised as a Modern Orthodox Jew, but does not identify religiously as a Jew.[23][24]

Academic publications

  • Yudkowsky, Eliezer (2007). "Levels of Organization in General Intelligence"(PDF). Artificial General Intelligence. Berlin: Springer. doi:10.1007/978-3-540-68677-4_12.

  • Yudkowsky, Eliezer (2008). "Cognitive Biases Potentially Affecting Interpretation of Global Risks"(PDF). In Bostrom, Nick; Ćirković, Milan (eds.). Global Catastrophic Risks.

    Oxford University Shove. ISBN .

  • Yudkowsky, Eliezer (2008). "Artificial Brainpower as a Positive and Veto Factor in Global Risk"(PDF). Predicament Bostrom, Nick; Ćirković, Milan (eds.). Global Catastrophic Risks. Oxford Institution of higher education Press. ISBN .
  • Yudkowsky, Eliezer (2011). "Complex Value Systems in Friendly AI"(PDF). Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Berlin: Springer.

  • Yudkowsky, Eliezer (2012). "Friendly Artificial Intelligence". In Eden, Amnon; Moor, James; Søraker, John; et al. (eds.). Singularity Hypotheses: A Scientific and Philosophical Assessment. The Frontiers Collection. Berlin: Springer. pp. 181–195. doi:10.1007/978-3-642-32560-1_10. ISBN .

  • Bostrom, Nick; Yudkowsky, Eliezer (2014). "The Ethics of Artificial Intelligence"(PDF). In Frankish, Keith; Ramsey, William (eds.). The Cambridge Handbook of Artificial Intelligence. New York: Cambridge University Press. ISBN .
  • LaVictoire, Patrick; Fallenstein, Benja; Yudkowsky, Eliezer; Bárász, Mihály; Christiano, Paul; Herreshoff, Marcello (2014). "Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem". Multiagent Interaction without Prior Coordination: Papers from the AAAI-14 Workshop. AAAI Publications. Archived from the original on April 15, 2021. Retrieved October 16, 2015.

  • Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer (2015). "Corrigibility"(PDF). AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications.

See also

Notes

References

  1. ^"Eliezer Yudkowsky on “Three Major Individuality Schools”" on YouTube.

    February 16, 2012. Timestamp 1:18.

  2. ^ ab Silver, Nate (April 10, 2023). "How Concerned Are Americans About The Pitfalls Of AI?". FiveThirtyEight. Archived from the original on April 17, 2023. Retrieved April 17, 2023.
  3. ^ Ocampo, Rodolfo (April 4, 2023). "I used to work at Google and now I'm an AI researcher. Here's why slowing down AI development is wise". The Conversation. Archived from the original on April 11, 2023. Retrieved June 19, 2023.

  4. ^ Gault, Matthew (March 31, 2023). "AI Theorist Says Nuclear War Preferable to Developing Advanced AI". Vice. Archived from the original on May 15, 2023. Retrieved June 19, 2023.

  5. ^ ab Hutson, Matthew (May 16, 2023). "Can We Stop Runaway A.I.?". The New Yorker. ISSN 0028-792X. Archived from the original on May 19, 2023. Retrieved May 19, 2023.

  6. ^ abcd Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN .
  7. ^ ab Leighton, Jonathan (2011). The Battle for Compassion: Ethics in an Apathetic Universe. Algora. ISBN .

  8. ^ Kurzweil, Ray (2005). The Singularity Is Near. New York City: Viking Penguin. ISBN .
  9. ^ Ford, Paul (February 11, 2015). "Our Fear of Artificial Intelligence". MIT Technology Review. Archived from the original on March 30, 2019. Retrieved April 9, 2019.

  10. ^ ab Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk"(PDF). In Bostrom, Nick; Ćirković, Milan (eds.). Global Catastrophic Risks. Oxford University Press. ISBN . Archived (PDF) from the original on March 2, 2013. Retrieved October 16, 2015.

  11. ^ Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer (2015). "Corrigibility". AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications. Archived from the original on January 15, 2016. Retrieved October 16, 2015.

  12. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. ISBN .
  13. ^ Moss, Sebastian (March 30, 2023). ""Be willing to destroy a rogue data center by airstrike" - leading AI alignment researcher pens Time piece calling for ban on large GPU clusters". Data Center Dynamics. Archived from the original on April 17, 2023. Retrieved April 17, 2023.

  14. ^ Ferguson, Niall (April 9, 2023). "The Aliens Have Landed, and We Created Them". Bloomberg. Archived from the original on April 9, 2023. Retrieved April 17, 2023.
  15. ^ ab Miller, James (2012). Singularity Rising. BenBella Books, Inc. ISBN .

  16. ^ Miller, James D. "Rifts in Rationality – New Rambler Review". newramblerreview.com. Archived from the original on July 28, 2018. Retrieved July 28, 2018.
  17. ^ Machine Intelligence Research Institute. "Inadequate Equilibria: Where and How Civilizations Get Stuck". Archived from the original on September 21, 2020. Retrieved May 13, 2020.

  18. ^ Snyder, Daniel D. (July 18, 2011). "'Harry Potter' and the Key to Immortality". The Atlantic. Archived from the original on December 23, 2015. Retrieved June 13, 2022.

  19. ^ Packer, George (2011). "No Death, No Taxes: The Libertarian Futurism of a Silicon Valley Billionaire". The New Yorker. p. 54. Archived from the original on December 14, 2016. Retrieved October 12, 2015.
  20. ^ Matthews, Dylan; Pinkerton, Byrd (June 19, 2019). "He co-founded Skype. Now he's spending his fortune on stopping dangerous AI". Vox. Archived from the original on March 6, 2020. Retrieved March 22, 2020.

  21. ^ Saperstein, Gregory (August 9, 2012). "5 Minutes With a Visionary: Eliezer Yudkowsky". CNBC. Archived from the original on August 1, 2017. Retrieved September 9, 2017.

  22. ^ Elia-Shalev, Asaf (December 1, 2022). "Synagogues are joining an 'effective altruism' initiative. Will the Sam Bankman-Fried scandal stop them?". Jewish Telegraphic Agency. Retrieved December 4, 2023.

  23. ^ Yudkowsky, Eliezer (October 4, 2007). "Avoiding your belief's real weak points". LessWrong. Archived from the original on May 2, 2021. Retrieved April 30, 2021.

External links