The Book "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom

Words: 1209 | Pages: 3 | 7 min read | Published: Sep 20, 2018

In “Superintelligence: Paths, Dangers, Strategies”, Nick Bostrom asks what will happen once we manage to build computers that are smarter than we are: what we need to do, how it is going to work, and why it has to be done exactly right to make sure the human race does not go extinct. Will artificial agents ultimately save or destroy us? Bostrom lays the foundation for understanding the future of humanity and of intelligent life. The human brain has capabilities that the brains of other animals lack, and it is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, this new superintelligence could become extremely powerful, possibly beyond our control. Just as the fate of the gorillas now depends more on humans than on the gorillas themselves, so the fate of humankind would depend on the actions of the machine superintelligence. Nevertheless, we have one advantage: we get to make the first move. Will it be possible to construct a seed artificial intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

Nick Bostrom’s work offers several ideas that bear on those questions.

1. In your opinion, what is the most interesting thought you encountered in the book?

In recent times, prominent figures such as Stephen Hawking, Bill Gates, and Elon Musk have expressed serious concerns about the development of strong artificial intelligence, arguing that the dawn of superintelligence might well bring about the end of humankind. In his book, Bostrom endeavours to shed some light on the subject and delves into quite a few particulars concerning the future of AI research.

The central argument of the book is that the first superintelligence to be created will have a decisive first-mover advantage and, in a world where no other system is remotely comparable, it will be very powerful. Such a system will shape the world according to its preferences and will probably be able to overcome any resistance that humans can put up. The bad news is that the preferences such an artificial agent could have would, if fully realized, involve the complete destruction of human life and of most plausible human values. The default outcome, then, is catastrophe. In addition, Bostrom argues that we are not out of the woods even if his initial premise is false and a unipolar superintelligence never appears. Before the prospect of an intelligence explosion, he writes, “we humans are like small children playing with a bomb.” It will, he says, be very difficult, but perhaps not impossible, to engineer a superintelligence with preferences that make it friendly to humans or able to be controlled.

So, will we create artificial agents that will destroy us? Will the machines really be able to rebel against us? Frankly speaking, the very idea of robots and AI agents taking control over humans is frightening in itself. Humankind should therefore apply itself to these questions before we build superintelligent machines. I find this idea extremely topical. Our world changes every minute, every second, and artificial agents are being developed further all the time. Nick Bostrom’s “Superintelligence” describes what the consequences of developing AI might be for humanity, and most of those consequences are portrayed as bad ones. In my opinion, however, artificial superintelligence will be an entirely new kind of intelligent entity, and we must therefore discover all of its benefits and advantages. Humanity’s first goal, over and above utilizing artificial intelligence for the betterment of our species, ought to be to respect and preserve the radical alterity and well-being of whatever artificial minds we create. Ultimately, I believe this approach will give us a greater chance of peaceful coexistence with artificial superintelligence than any of the strategies for “control” (containment of abilities and actions) and “value loading” (getting AIs to understand and act in accordance with human values) outlined by Bostrom and other AI experts.

We could use AI agents in our daily lives, as well as in creating and engineering new technologies. Artificial intelligence will certainly automate some jobs, particularly those that rely on assembly lines or data collection. AI will also help businesses handle high-speed customer demands: conversational AI chatbots and other virtual assistants will manage the day-to-day flow of work. It is estimated that 85% of customer interactions will be managed by artificial intelligence by 2020. We can see that AI agents can considerably ease our lives.

2. Is the prospect of achieving this type of superintelligence realistic?

The idea of artificial superintelligence (ASI) has long tantalized and taunted the human imagination, but only in recent years have we begun to analyze in depth the technical, strategic, and ethical problems of creating and managing advanced artificial intelligence. Artificially intelligent agents are already replacing human workers in factories (for example, it is estimated that only about 15% of lost American manufacturing jobs went to other countries, while the remaining 85% were lost to automation). AI is replacing doctors in the diagnosis of illness. It is replacing taxi drivers. It is composing music. For instance, there is a hotel and a restaurant in Japan that are staffed almost entirely by robots. And this is just the beginning. In his book, Nick Bostrom estimates that it seems entirely feasible we will have a more-than-human AI, a superintelligent AI, by the end of the century. However, scientists might be one of the few groups that actively suppress the desire to make such predictions. Conservative and data-driven by nature, they might be uncomfortable making guesses about the future, because that requires a leap of faith. Even if there is a lot of data to support a prediction, there are also countless variables that can change the ultimate outcome in the interim. Trying to predict what the world will be like in a century does not do much to improve it today; if scientists are going to be wrong, they would rather be wrong constructively. Indeed, the world has changed a lot in the past 100 years.

In 1918, much of the world was embroiled in the First World War. 1918 was also the year the influenza pandemic began to rage, ultimately claiming somewhere between 20 and 40 million lives, more than the war during which it took place. Congress established time zones, including Daylight Saving Time, and the first stamp for U.S. airmail was issued. Looking back, it is clear that we have made remarkable strides. Today, scientists are doing their best to achieve new results, in particular in teaching machines to think the way humans do. Recently, a new type of neural network has been built that can dramatically improve the efficiency of teaching machines to think like we do. The network, called a reservoir computing system, is made with memristors; it could predict words before they are said during a conversation and help predict future outcomes based on the present. In addition, it can process a photo and correctly identify a human face, because it has learned the features of human faces from other photos in its training set.
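
For readers curious about how a reservoir computing system works in principle, the sketch below shows a minimal echo state network written in Python with NumPy, a common software counterpart of the memristor-based hardware mentioned above. The network size, the spectral-radius and leak-rate values, and the toy next-sample prediction task are illustrative assumptions on my part, not details taken from the essay or from the research it describes.

    import numpy as np

    # Minimal echo state network: a common software form of reservoir computing.
    # All sizes, hyperparameters, and the toy task (predicting the next sample
    # of a sine wave) are illustrative assumptions.
    rng = np.random.default_rng(0)
    n_inputs, n_reservoir = 1, 200
    spectral_radius = 0.9   # keeps the reservoir dynamics stable (echo state property)
    leak_rate = 0.3

    # Input and recurrent weights are random and fixed; only the readout is trained.
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))

    def run_reservoir(inputs):
        """Drive the reservoir with a 1-D input sequence and collect its states."""
        states = np.zeros((len(inputs), n_reservoir))
        x = np.zeros(n_reservoir)
        for t, u in enumerate(inputs):
            pre_activation = W_in @ np.atleast_1d(u) + W @ x
            x = (1 - leak_rate) * x + leak_rate * np.tanh(pre_activation)
            states[t] = x
        return states

    # Toy prediction task: given the signal so far, predict its next sample.
    signal = np.sin(np.linspace(0, 20 * np.pi, 2000))
    u_train, y_train = signal[:-1], signal[1:]
    X = run_reservoir(u_train)

    # Train the linear readout with ridge regression (the only learned parameters).
    ridge = 1e-6
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y_train)

    predictions = X @ W_out
    print("training mean squared error:", np.mean((predictions - y_train) ** 2))

The design point is that only the final linear readout is trained while the reservoir itself stays fixed, which is why reservoir computing maps naturally onto fixed physical substrates such as memristor arrays.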


If this progress continues, I believe we will soon achieve an artificial superintelligence, and it will be an entirely new kind of intelligent entity.
