{{connected contributor (paid)|User1=Riceissa|U1-employer=[[User:Vipul]]|U1-otherlinks=https://en.wikipedia.org/w/index.php?title=Wikipedia:Conflict_of_interest/Noticeboard&oldid=770207380#Vipul.27s_paid_editing_enterprise}}
{{WikiProjectBannerShell|blp=yes|1=
{{WikiProject Biography |living=yes |class=Start |s&a-work-group=yes|s&a-priority=mid |listas=Bostrom, Nick}}
{{WikiProject Philosophy |mind=yes |ethics=yes |contemporary=yes |philosopher=yes |importance=mid |class=start}}
{{WikiProject Sweden |class=start |importance=low}}
{{WikiProject Transhumanism |class=C |importance=High}}
{{WikiProject Effective Altruism |class=C |importance=mid}}
{{WikiProject Alternative Views|class=C |importance=mid}}
{{WikiProject Futures studies |class=C |importance=High}}
{{WikiProject University of Oxford |class=C |importance=low}}
}}
{{Old AfD multi|page=Nick Bostrom|result='''keep'''}}
== Does this guy have a birth certificate? ==
The point being, it appears there is information missing in the Wikipedia entry about him that is false. Even if presented with a credible-looking birth certificate, I'm skeptical of this guy. He appears to be an extra-terrestrial good at making implied arguments. Also, as a person who has considered himself a Wikipedian, I am, in general, against Wikipedia entries about people themselves (I think it best such persons make a nice user page for themselves). However, if this guy is the philosopher who is alleged to have originally posited the computer simulation hypothesis, then I don't think it's a huge issue. It seems to me that he is an extra-terrestrial with experience of posthuman civilizations (thus arguing that propositions #1 and #2 of his propositions for a computer simulation are true) who is claiming the following: "We are living in a simulation which has been generated by our [probably not *his,* though] descendants for their own creation in the future." It might also be interpreted that he is an alien who is able to generate a signal that manages to get into isolated dimensions via a randomization process to inform persons, "If you're reading this right now, and there is no one or evidence to the contrary, despite the relative nature of space-time and reality to inform you otherwise, then I'd like to inform you that you're in a computer simulation right now." - Dennis Francis Blewett (January 26th, 2022)
== Name ==
Should the article be moved to [[Nick Boström]]? Nick is Swedish (unless he has changed citizenship lately) and that ''is'' his correct name. On the other hand, he himself uses "Bostrom" in the English-speaking world. —[[User:Naddy|Naddy]] 01:42, 21 Mar 2005 (UTC)
:FWIW, I've never seen it as anything but "Bostrom" (reading only English material), and since it is the spelling he himself uses in English leaving it here seems sensible on the English Wikipedia. I've created a redirect, though, and mentioned the original spelling, among other things. --[[User:Mindspillage|Mindspillage]] [[User talk:Mindspillage|(spill yours?)]] 02:39, 21 Mar 2005 (UTC)
::"Nick Bostrom (born Niklas Boström in 1973)" It sounds to me as if he changed his surname from Boström to Bostrom. Is this the case? If not, that line should be rewritten. [[User:Ran4|Ran4]] 15:52, 24 July 2007 (UTC)
A quick Google showing only Swedish pages would indicate that he's called Nick Boström, but usually calls himself Bostrom. I might also note that Bostrom is pronounced entirely differently than Boström. [[User:Tubba Blubba|Tubba Blubba]] ([[User talk:Tubba Blubba|talk]]) 03:57, 23 June 2008 (UTC)
== Stand-up comedian ==
A pdf available here - http://www.spectrum.ieee.org/jun08/6272 - says that Nick had a short career as a stand-up comedian, which seems an unusual choice for a [[rapture]]-style transhumanist and a Swede. Can the comedian information be confirmed? If so, it must be put into the article. [[User:Strangerstome|Strangerstome]] ([[User talk:Strangerstome|talk]]) 07:30, 8 June 2008 (UTC)
:So, why does the fact that he is a Swede contradict the fact that he used to be a stand-up comedian? I remember him from some Swedish stand-up comedy TV show in the early 90's where they occasionally exposed upcoming young talents. It seems like he continued his stand-up comedy career in the UK when he moved there - just take a look at his homepage where he mentions it. // '''Jens Persson''' ([[Special:Contributions/193.10.251.119|193.10.251.119]] ([[User talk:193.10.251.119|talk]]) 10:10, 30 June 2008 (UTC))
::What was his act? Was it funny? It just seems unusual, since Swedes are not known for their humor, and science and humor are not connected. This perhaps makes it notable. [[User:Strangerstome|Strangerstome]] ([[User talk:Strangerstome|talk]]) 23:08, 3 July 2008 (UTC)
:::A possible explanation for why Swedes might not be known for their humor is because most of the material is in Swedish it usually doesn't spread to other countries. And while science and humor might not be directly connected, that clearly doesn't exclude the possibility. But I agree that it seems odd for someone now focusing their effort on reducing existential risk. [[User:Erik.Bjareholt|Erik.Bjareholt]] ([[User talk:Erik.Bjareholt|talk]]) 18:58, 6 November 2014 (UTC)
== potential resource ==
[http://www.businessweek.com/magazine/guardians-of-the-apocalypse-12152011.html Guardians of the Apocalypse; The tech-nerd legion bent on saving humanity from asteroids, contagions, and robot revolutions] December 15, 2011, 4:30 PM EST by Ashlee Vance in [[BusinessWeek]], excerpt "Professor Nick Bostrom ranks various threats to mankind (Illustrations by QuickHoney)"
[[Special:Contributions/99.19.46.105|99.19.46.105]] ([[User talk:99.19.46.105|talk]]) 11:10, 28 December 2011 (UTC)
== Simulation Argument/Hypothesis ==
Firstly, this section should be named Simulation Argument, as the simulation hypothesis refers only to the concept of the world as simulation, an idea not original to Bostrom. The important contribution by Bostrom is the argument resulting in his trilemma. Secondly, the statement "Because H will be such a large value, at least one of the three proximations will be true" is incorrect, as should be obvious to anyone glancing at the formula given above. H cancels from the formula and is thus irrelevant, permitting the statement of the trilemma. Also, the three propositions are not completely correct: they are stated in terms of absolutes, something the argument itself avoids because it cannot make such strong statements; it deals in averages, not absolutes. [[User:Randomnonsense|Randomnonsense]] ([[User talk:Randomnonsense|talk]]) 22:42, 29 January 2012 (UTC)
Hah! Just noticed someone else edited the article to express their bafflement at "Because H will be such a large value, at least one of the three proximations will be true". Reading the description of the argument carefully I feel it would benefit from being entirely rewritten in a more compact and faithful way (to the original paper). For instance, why are the propositions not given as Bostrom himself gives them in his paper? The same for the definitions of fp, N, H and fsim. The commentary on the history of the hypothesis seems irrelevant when the article could simply point to the [[Simulation hypothesis]] article instead, it is after all an article on Bostrom not the history of the simulation hypothesis. It would also be nice to briefly mention the basis of his argument for empirical reasons to believe in the simulation hypothesis i.e. information content of human sensory perception and technological projections. Any update should also mention his recent paper on a bug in the argument, [http://www.simulation-argument.com/patch.pdf "A Patch for the Simulation Argument"]. The mention of the Strong Self-Sampling Assumption is also odd, considering that the argument explicitly doesn't utilize that assumption, as is evident from the formula used to derive the trilemma (H is the number of people not the number of observer moments). [[User:Randomnonsense|Randomnonsense]] ([[User talk:Randomnonsense|talk]]) 00:36, 30 January 2012 (UTC)
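For readers following this thread, the cancellation Randomnonsense describes can be written out explicitly. The following is a sketch in the notation of Bostrom's 2003 paper (f_P, N̄, H̄, f_sim as defined there), not a quotation from it:

```latex
f_{\mathrm{sim}}
  = \frac{f_P \,\bar{N}\, \bar{H}}{\left(f_P \,\bar{N}\, \bar{H}\right) + \bar{H}}
  = \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}
  \qquad \text{(dividing numerator and denominator by } \bar{H}\text{)}
```

Since H̄ divides out of the fraction, its magnitude plays no role in which branch of the trilemma holds.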
== Delete eventual fate? ==
I have some issues with the "eventual fate" section: first, it seems rather odd to have claims about future biographical information in an article. Second, and more seriously, the information is slightly incorrect - there were some errors in the Sunday Times article that triggered the information cascade the Oxford Today article is part of. Nick has actually *not* confirmed that he is signed up. Of course, by now there will be plenty of articles making the claim based on the original article, so it will look like a confirmed fact when it isn't. I suggest that we remove the eventual fate section, but I do have some misgivings that the claim will reappear. [[User:Anders Sandberg|Anders Sandberg]] ([[User talk:Anders Sandberg|talk]]) 07:14, 12 July 2013 (UTC)
:I had a look at this comment by Anders and looked at the original source and agreed that this section is problematic. First, aside from Anders' complaints, the title "Eventual Fate" is strange and nonstandard for biographies, and seems suspiciously like a tongue-in-cheek joke, given that Bostrom studies the future and thereby the "fate" of humanity. Second, the sentence itself is worded weirdly - people don't normally state that they have "agreed to pay" for a service. Third, this seems like a bit of strange hearsay that seems inessential to providing important information about the person in question. Fourth, there seem to be questions about the reliability of the source itself, and I don't see any other references here supporting it - but even if there are, those sources may track back to the same unreliable foundations. I don't see why this deserves its own section, and even if the statement about cryonics is retained with that reference it should be moved to another section (with the 'Eventual Fate' section removed) and reworded. [[User:LanceSBush|LanceSBush]] ([[User talk:LanceSBush|talk]]) 14:20, 1 August 2013 (UTC)
::I agree with Sandberg and Bush. The section is inappropriate, poorly worded and factually incorrect. I am removing it. [[User:Sir Paul|Sir Paul]] ([[User talk:Sir Paul|talk]]) 14:34, 1 August 2013 (UTC)
==Television==
The television section feels a bit odd; Nick is on TV fairly often as a public intellectual - those examples are just a scattered handful. [[User:Anders Sandberg|Anders Sandberg]] ([[User talk:Anders Sandberg|talk]]) 17:40, 18 October 2014 (UTC)
== External links modified ==
Hello fellow Wikipedians,
I have just added archive links to {{plural:2|one external link|2 external links}} on [[Nick Bostrom]]. Please take a moment to review [https://en.wikipedia.org/w/index.php?diff=prev&oldid=686402427 my edit]. If necessary, add {{tlx|cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{tlx|nobots|deny{{=}}InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
*Added archive https://web.archive.org/20091220070749/http://gannonaward.org:80/The_Gannon_Award/The_Gannon_Group.html to http://gannonaward.org/The_Gannon_Award/The_Gannon_Group.html
*Added archive https://web.archive.org/20091101061716/http://www.fhi.ox.ac.uk:80/archive/2009/eugene_r._gannon_award_for_the_continued_pursuit_of_human_advancement to http://www.fhi.ox.ac.uk/archive/2009/eugene_r._gannon_award_for_the_continued_pursuit_of_human_advancement
When you have finished reviewing my changes, please set the ''checked'' parameter below to '''true''' to let others know.
{{sourcecheck|checked=true}}
Cheers. —[[User:Cyberbot II|<sup style="color:green;font-family:Courier">cyberbot II</sup>]]<small><sub style="margin-left:-14.9ex;color:green;font-family:Comic Sans MS">[[User talk:Cyberbot II|<span style="color:green">Talk to my owner</span>]]:Online</sub></small> 22:58, 18 October 2015 (UTC)
== Nick Bostrom not a futurologist?! ==
Recently [[User:Apollo The Logician]] [https://en.wikipedia.org/w/index.php?title=Nick_Bostrom&type=revision&diff=754117886&oldid=754116934 removed] [[:Category:Futurologists]] from the article, saying "Not a futurologist".
I contest this removal in that I'm relatively sure that Nick Bostrom can be, should be, and is considered a futurologist. See the definition at [[futurologist]]: "futurists or futurologists are scientists and social scientists whose specialty is futurology '''''or the attempt to systematically explore predictions and possibilities about the future and how they can emerge from the present''''', whether that of human society in particular or of life on Earth in general."
That's exactly what Bostrom is doing in most of his studies.
Maybe "futurologist" has a bad connotation for some users here? It doesn't have to and that's no reason to not add it.
He's also been called a futurologist by multiple sources: [http://www.newyorker.com/books/joshua-rothman/what-are-the-odds-we-are-living-in-a-computer-simulation], [https://books.google.de/books?id=ken5CQAAQBAJ&pg=PA145], [https://books.google.de/books?id=cydGlD0edysC&pg=PT5], [http://www.concatenation.org/nfrev/bostrom_superintelligence.html], [https://books.google.de/books?id=JMyClio8BmUC&pg=PA400], [https://seanduke.com/2014/06/04/whats-it-all-about-on-rte-radio-1-life-death-beyond-episode-3/], [http://cecs.louisville.edu/ry/LeakproofingtheSingularity.pdf], [...]
Also of relevance here: [[List of futurologists]].
--[[:User:Fixuture|'''F'''ix'''uture''']] ([[:User talk:Fixuture|talk]]) 13:34, 11 December 2016 (UTC)
:Fair enough [[User:Apollo The Logician|Apollo The Logician]] ([[User talk:Apollo The Logician|talk]]) 13:36, 11 December 2016 (UTC)
::It's an irrelevant point because he's categorised (2x) in [[:Category:Transhumanists]], a subcategory of [[:Category:Futurologists]]. Per [[WP:SUBCAT]], the latter category ought to be removed again. -- [[User:Michael Bednarek|Michael Bednarek]] ([[User talk:Michael Bednarek|talk]]) 01:43, 12 December 2016 (UTC)
:I think "futurologist" applies to people who try to extrapolate future trends in a lot of different domains. Otherwise a typical stock-market analyst is a futurologist too. The first three [[WP:RS]] on a Google news search for "Nick Bostrom" gave me [http://www.newsweek.com/nick-bostrom-google-winning-artificial-intelligence-arms-race-red-button-506624][https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine][http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom], none of which describe him as a futurologist. So I don't think Bostrom fits under futurology per [[WP:NONDEF]], as it's not a ''consistently-used'' description. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 06:05, 16 December 2016 (UTC)
:"Futurologist" does have a moderately bad connotation in the scientific community! This probably means the description should be used sparingly, not just from Bostrom but for everybody, per [[WP:LABEL]]. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 06:03, 16 December 2016 (UTC)
::{{ping|Rolf h nelson}} Per [[User:Michael Bednarek]] I think it's irrelevant anyway: your Google search doesn't look like an honest attempt to check whether he's been called a futurologist. That the top 3 Google search results do ''not'' call him so is a sincerely ridiculous argument, one I had never heard here until now. I already listed 7 sources calling him so and there are more - I think that should be enough.
::And I already supposed that the term has a bad connotation in the scientific community, and hence I addressed that earlier as well! To expand on the "it doesn't have to": if people like Bostrom aren't called futurologists despite being so, it's no wonder the term keeps its bad connotation: people like him should be showcases of how future studies ''can'' be approached in a serious and useful manner.
::--[[:User:Fixuture|'''F'''ix'''uture''']] ([[:User talk:Fixuture|talk]]) 20:46, 21 December 2016 (UTC)
:::Thanks Fixuture, perhaps I'm not communicating my point well, so let me try to clarify. I agree 100% that Bostrom ''has been called'' a futurologist in occasional [[WP:RS]]. Stephen Hawking ''has been called'' a visionary, and a [http://www.telegraph.co.uk/news/uknews/3248858/Stephen-Hawking-to-retire-as-Cambridges-Professor-of-Mathematics.html mathematician]. We don't put Stephen Hawking in a category called 'visionaries' nor 'mathematician', per [[WP:NONDEF]], because he's not ''consistently described'' as either of those. The fact that 'visionary' additionally is a [[WP:LABEL]] (in this case, a positive one) is an additional reason not to put him in such a category, or perhaps to not even have such a category. My personal opinion, which could be wrong, is that there isn't enough consistent description of him as a futurologist to merit adding him to the category, which I consider to be a fuzzy category. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 02:01, 22 December 2016 (UTC)
:::"I addressed that earlier as well!" Yes, I saw, but the thing I would've liked you to expand on is the more relevant "that's no reason to not add it" and not the "it doesn't have to" part. :-) That said, I don't feel strongly about it; if you feel strongly, then unless other editors speak up and state they think [[WP:NONDEF]] a significant issue here, I'm conceding to the change. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 02:01, 22 December 2016 (UTC)
== Sourcing issues and OR ==
This article seems to contain a quite a bit of OR and self sourcing. Please cite from RS before reverting. Thanks. [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 19:06, 13 March 2017 (UTC)
:Primary sources are considered reliable sources.[[User:Apollo The Logician|Apollo The Logician]] ([[User talk:Apollo The Logician|talk]]) 19:26, 13 March 2017 (UTC)
:That is incorrect. See [[WP:PSTS]]. To help you, I am setting out the relevant text '' All analyses and interpretive or synthetic claims about primary sources must be referenced to a secondary source, and must not be an original analysis of the primary-source material by Wikipedia editors.''. Please provide reliable secondary sources. Since this is about a BLP I am obliged by policy to immediately remove the controversial text to protect the encyclopedia and the trust of our readers. [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 19:40, 13 March 2017 (UTC)
::The key words there are "interpretive" and "synthetic". Simply summarising what he says in different words is not either of those things.[[User:Apollo The Logician|Apollo The Logician]] ([[User talk:Apollo The Logician|talk]]) 19:43, 13 March 2017 (UTC)
:::Please provide a reliable secondary source for these alleged summaries in ''different words''. You cannot force an editor to read the book / paper to verify these summaries. If the work/BLP is so notable, surely there would be secondary sources, and lots of them? [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 19:52, 13 March 2017 (UTC)
:::: FYI, [[WP:3RRBLP]] ''Removing violations of the biographies of living persons (BLP) policy that contain libelous, biased, unsourced, or '''poorly sourced contentious material'''. What counts as exempt under BLP can be controversial. Consider reporting to the BLP noticeboard instead of relying on this exemption.'' [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 19:52, 13 March 2017 (UTC)
:Inlinetext is a long term disruptive user. He wasted a huge amount of time at [[Geodesics on an ellipsoid]] falsely claiming copyright violations and "original research". He also deleted more than half of [[Stanton Foundation]] and [[Parker Conrad]] without a good reason. I am reverting back to the edit before Inlinetext's first edit. [[User:Jrheller1|Jrheller1]] ([[User talk:Jrheller1|talk]]) 19:56, 13 March 2017 (UTC)
::{{ec}} I am applying Wikipedia policy to this article, you are edit warring by reverting me on this BLP. Now please quickly find reliable secondary (preferably scholarly showing wide peer acceptance) sources for all this OR /SYNTH and SPS which I have clearly identified, instead of raising red herrings and HARASSING / STALKING me. You are of course aware that this is exactly the sort of dubious poorly sourced EA puffery which is littering Wikipedia inserted by paid EA advocacy editors like Vipul, Issarice ? [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 20:04, 13 March 2017 (UTC)
:::Since there is no response or citation of authentic secondary sources, I shall proceed to delete those poorly sourced controversial portions in this BLP I had previously excised. I am doing this under the BLP policy to safeguard the integrity of this article from Original Research, synthesis, and cite-fraud. [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 03:39, 15 March 2017 (UTC)
{{od}}{{re|Apollo The Logician}} re: '' BLP and OR doesnt apply to primary sources'' and ''Primary sources are considered reliable sources'', could you point me to the 'basis' on this, or set it out here ? [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 18:30, 15 March 2017 (UTC)
:Not "pseudo-science" ? You may want to see [http://www.math.columbia.edu/~woit/wordpress/?p=583 "far beyond the edge of absurd"]. [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 18:30, 15 March 2017 (UTC)
:I am not exactly sure what you mean by that but read [[WP:PRIMARY]] and [[WP:RS]]. Primary sources are considered reliable though it is better to use secondary etc.[[User:Apollo The Logician|Apollo The Logician]] ([[User talk:Apollo The Logician|talk]]) 20:06, 15 March 2017 (UTC)
::I suggest that you have misread a stray phrase from inapplicable sections of those policies. For BLPs the sourcing policy is [[WP:BLPSOURCE]], eg [[WP:BLPREMOVE]]. [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 03:15, 16 March 2017 (UTC)
:::For this topic there are a large number of reputably published primary sources as well as primary sources which have been extensively discussed by secondary sources and can therefore be used for augmentation. The book ''Superintelligence'', for instance, has received multiple reviews not only in the press but also in academic journals. If material is lacking proper sourcing then flag it and suitable sourcing material will probably be found. [[User: Kbog|'''<font color="black">K</font>''']].[[User talk:Kbog|'''<font color="blue">Bog</font>''']] 11:18, 17 March 2017 (UTC)
* The removals have been very much undue, especially since they were done without any attempt to build consensus on the talk page or find better sourcing. I have restored most of the removed material with the benefit of secondary sources, since that seems to be the latest pet peeve of deletionists. In the future, you can just write "this needs secondary sourcing" or "this primary source is not being used appropriately", and I can find such a source and we'll all be able to go home happy, or alternatively we can verify that the sources are in fact being used appropriately as per [[WP:PRIMARY]]. In my mind, removing improperly sourced material without first verifying that there is no viable source available is grounds for a speedy revert. FWIW, I have read many of these primary sources. [[User: Kbog|'''<font color="black">K</font>''']].[[User talk:Kbog|'''<font color="blue">Bog</font>''']] 10:26, 17 March 2017 (UTC)
== Lede emphasising he is known for the AI argument ==
The lede is cluttered with info that does not belong. The lede should emphasise the reason that Bostrom has his own page and be intriguing enough that readers would want to read through the main body of the article. That reason is his AI concerns, and his main arguments from the AI book should be touched on. Critical responses should also be mentioned. Refs should not be in the lede anyway. [[User:Overagainst|Overagainst]] ([[User talk:Overagainst|talk]]) 15:36, 28 October 2017 (UTC)
:I disagree. The previous lead paragraph gave a more comprehensive overview of Bostrom's work, in line with [[MOS:LEAD]]. Your version focusses only on AI. On a formal level, the lead now links the term [[superintelligence]] 3 times – see [[WP:REPEATLINK]] – but omits a link to [[Instrumental convergence]], where [[paperclip maximizer]] is explained. <!-- Template:Unsigned --><small class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Michael Bednarek|Michael Bednarek]] ([[User talk:Michael Bednarek#top|talk]] • [[Special:Contributions/Michael Bednarek|contribs]]) 05:12, 29 October 2017 (UTC)</small>
::I agree with Michael. [[User:GojiBarry|GojiBarry]] ([[User talk:GojiBarry|talk]]) 01:54, 5 February 2018 (UTC)
::{{ping|Fixuture|Rolf h nelson|Kbog}} Do you have any thoughts on this change in the lede? [[User:GojiBarry|GojiBarry]] ([[User talk:GojiBarry|talk]]) 01:25, 11 February 2018 (UTC)
:::Most Wikipedia BLP leads are bland CVs because that's the easiest way to do things, but MOS:LEAD doesn't really support that; rather MOS:LEAD states "Like in the body of the article itself, the emphasis given to material in the lead should roughly reflect its importance to the topic, according to reliable, published sources", so whether to even mention the anthropic principle, the reversal test, and consequentialism in the lede is a judgement call which I don't have a strong opinion about. Most of the MSM media coverage appears to be about superintelligence, followed moderately distantly by the simulation argument. Looking at [https://scholar.google.com/citations?user=oQwpz3QAAAAJ&hl=en], human enhancement ethics should definitely be mentioned in the lede. I don't know what "touching on" the main arguments would entail, but it would probably be difficult to be concise, non-technical, and NPOV in the lede; this isn't a large article and therefore wouldn't need a four-paragraph lede. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 02:03, 12 February 2018 (UTC)
:::I liked the old intro better and from a pure style point of view, if you elaborate on his AI theories, then it should be at the end of the second paragraph rather than right up front. The way it's written now is kind of weird and jarring, and it detracts from his other notable work. Definitely don't exclude mention of the other topics besides AI, sure pop media doesn't talk about them as much but they are still popular and have decent academic recognition. Remember one of the main purposes of an introduction is to tell the reader why the subject is notable; talking about what he believes doesn't tell us why he is a notable person, and talking about what he believes right in the beginning of the introduction forces the reader to go further in order to find information about why he is notable. I don't have a problem with elaborating on AI if it's at the end of the introduction. [[User: Kbog|'''<font color="black">K</font>''']].[[User talk:Kbog|'''<font color="blue">Bog</font>''']] 05:20, 13 February 2018 (UTC)
:::Also, I don't see any reason to mention critical responses to his theories in the lede - that's not important for telling the reader who he is, why he's notable, etc. As far as I can tell we don't normally do that, even for researchers with theories as controversial as, say, [[Karl Marx]]. Refs in the intro are also fine, I don't see any problem with them. [[User: Kbog|'''<font color="black">K</font>''']].[[User talk:Kbog|'''<font color="blue">Bog</font>''']] 05:27, 13 February 2018 (UTC)
:::To be clear: this version of the intro was good, people can add one or two sentences about the end going into detail about superintelligence if they want. [https://en.wikipedia.org/w/index.php?title=Nick_Bostrom&oldid=818453835] [[User: Kbog|'''<font color="black">K</font>''']].[[User talk:Kbog|'''<font color="blue">Bog</font>''']] 05:35, 13 February 2018 (UTC)
== External links modified ==
Hello fellow Wikipedians,
I have just modified one external link on [[Nick Bostrom]]. Please take a moment to review [[special:diff/814804925|my edit]]. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit [[User:Cyberpower678/FaQs#InternetArchiveBot|this simple FaQ]] for additional information. I made the following changes:
*Added {{tlx|dead link}} tag to http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0019/5923/How_Unlikely_is_a_Doomsday_Catastrophe_plus_Supplementary_Materials.pdf
*Added archive https://web.archive.org/web/20120127111542/http://www.anthropic-principle.com/book/anthropicbias.pdf to http://www.anthropic-principle.com/book/anthropicbias.pdf
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
{{sourcecheck|checked=false|needhelp=}}
Cheers.—[[User:InternetArchiveBot|'''<span style="color:darkgrey;font-family:monospace">InternetArchiveBot</span>''']] <span style="color:green;font-family:Rockwell">([[User talk:InternetArchiveBot|Report bug]])</span> 01:11, 11 December 2017 (UTC)
==Musk==
Musk merely said Bostrom's book ''Superintelligence'' is "worth reading". Musk has warned about one country's advanced, strategically programmed computer starting a nuclear world war three in a preemptive attempt to keep the state whose defences it is part of from being defeated, but he has never mentioned the possibility of AI deciding that its own very particular interests are best served by killing off ''all'' humanity, which is the main concern that Bostrom is raising. [[Eliezer Yudkowsky]] sketched a scenario for how an artificial intelligence could fulfill Bostrom's worst fears.
Musk's thinking is not similar to Bostrom's at all. And he is spreading open-source AI all over the globe, which does not sound like someone worried about a scenario in which AI bootstraps itself into an undercover superintelligence with only one logical move left: a three-act play in which humans disappear during the second act. [[User:Overagainst|Overagainst]] ([[User talk:Overagainst|talk]]) 20:24, 12 February 2018 (UTC)
:You have a good point that the sources provided, besides being weak, are indeed ambiguous about what kind of existential risk Musk believes AI poses. I added [https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x vanity fair], which states "Musk, Stephen Hawking, and Bill Gates are all raising the same warning about A.I." and quotes Musk in more detail to make it clearer that Musk does claim to be strongly motivated by concern about superintelligence deciding to kill off humanity: “Let’s say you create a self-improving A.I. to pick strawberries,” Musk said, “and it gets better and better at picking strawberries and picks more and more and it is self-improving, so all it really wants to do is pick strawberries. So then it would have all the world be strawberry fields. Strawberry fields forever.” [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 05:26, 13 February 2018 (UTC)
== Problems with body ==
I see problems with the body.
1 - too much writing on superintelligence. It's fine to give it much more weight than the other topics, in accordance with its greater notability. But right now it is just a thick summary of his book with some disparate excerpts and ideas. Also, there is too much attention given to certain aspects of the book, like way too much info on the illustrative scenario, which is only a small part of his book. The article needs to zoom out and look at his research/career as a whole, in context of other researchers and ideas. Right now it is too zoomed in, extracting lots of details from his writing.
2 - organization is off, lots of things are listed under "Superintelligence" even though they are different topics. The little "Philosophy" section only has x-risk even though x-risk is not really philosophy at all.
3 - I don't like some of the writing, I think it could do with some copyediting. It should have less jargon and fewer technical concepts, fewer examples and quotes from the book, more generality and more clarification of exactly what makes his ideas different and notable compared to other ideas.
I thought I had watchlisted this page, maybe I was just busy when all these edits were made. I see [[User:Overagainst]] contributed a lot here and I was not paying attention before it was all completed. Well I intend to change it up but I'm posting here first in case anyone has anything to say. [[User: Kbog|'''<font color="black">K</font>''']].[[User talk:Kbog|'''<font color="blue">Bog</font>''']] 05:50, 13 February 2018 (UTC)
::If you want to expand the philosophy section and have it solely on Bostrom's analytic philosophy background and ideas, I see no problem. However, Bostrom is notable for his ''Superintelligence'' book and existential risk institute. Any article on him that did not give that a lot of weight and explanation would be badly flawed, and I don't think it should go back to being like that. Here is what you had
::[https://en.wikipedia.org/w/index.php?title=Nick_Bostrom&oldid=770756593]'''In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that with "the creation of a superintelligent being represents a possible means to the extinction of mankind", and "there are actions that can be taken to reduce this risk," such as "the creation of a ‘friendly’ superintelligent being."[20] In January 2015, Bostrom joined Stephen Hawking among others in signing the Future of Life Institute's open letter warning of the potential dangers of AI.[21] The signatories "...believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today."[22]'''
::You left it completely vague as to why Bostrom thinks there is a risk and the massive difficulties he sees in creating such a being. And you did not tell the reader that Bostrom's book notes that any number of possible aims an AI could have might converge on the single instrumental one of exterminating humanity. The takeover scenario is a distillation of Bostrom's book, in which he repeatedly raises the specter of an AI defeating various attempts to hard-wire it to be friendly. Throughout the book Bostrom is pointing out flaws in Yudkowsky's ideas on control. Those are the technical parts of the book that I did not think I could do justice to. Perhaps you can. However, the takeover scenario is Bostrom's most notable argument and without it you are misleading readers about what he thinks the risk is. I suggest you write a philosophy section (as long as you like) and put it here for discussion. Same with the Superintelligence section. [[User:Overagainst|Overagainst]] ([[User talk:Overagainst|talk]]) 20:15, 13 February 2018 (UTC)
:::[[WP:SUMMARY]] doesn't prohibit having a large summary section that duplicates the content of [[Existential risk from artificial general intelligence]], but IMHO given finite editor resources and given the presumably low reader traffic to this page, the simplest thing would be to have the summary go back to being just a paragraph as the reader can link through to the main article if they want to learn more. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 06:17, 14 February 2018 (UTC)
::::You have done a pretty good job at [[Existential risk from artificial general intelligence]] but the article lacks some things this one has, and you seem to want it to be the go-to page for Bostrom. Yet there is no link to Bostrom in the lede. Now you want to chop this article down. But this one has things that [[Existential risk from artificial general intelligence]] lacks. You came here and cut out mention of a point that Bostrom has repeatedly made in interviews, namely the Fermi paradox. Something he has also done in a book (''Global Catastrophic Risks'', Nick Bostrom, Milan M. Cirkovic, 2011) which has several pages on the Fermi paradox. So I am afraid it '''is''' one of his main arguments as far as he is concerned. Another thing Bostrom mentions in his book, and which I put here but isn't on your edit of the main AI risk page, is John von Neumann and Bertrand Russell advocating during the 40s that the US threaten or use its nuke monopoly to eliminate any possible future nuclear threat from the USSR. Kindness or malevolence doesn't really come into it, and in my opinion the basic argument is not "If superintelligent AI is possible, and if it is possible for a superintelligence's goals to conflict with basic human values, then AI poses a risk of human extinction." at all. Obviously John von Neumann and Bertrand Russell did not oppose human values, they just took what they saw as a realistic view of the situation and tried to eliminate a threat to the entity they were part of. Bostrom's book seems to be making the point that a super-AI acting absolutely rationally might well be led by logic to try to eliminate the potential threat that humans represent, and it will be able to surreptitiously manufacture nanotech weapons to do it. In contrast, your AI risk article is not explicit about the threat of a strategising super-AI deliberately exterminating humanity at all, and you don't say anything about the means. 
So IMO the Bostrom page has things the supposedly main article on the subject lacks despite its very great length.[[User:Overagainst|Overagainst]] ([[User talk:Overagainst|talk]]) 15:45, 14 February 2018 (UTC)
:::::Bostrom's elaboration of Robin Hanson's [[Great Filter]] can certainly go in this article if it's well-sourced from reliable secondary sources, as can the nuke monopoly point. If it's only from primary sources about Bostrom's beliefs such as the book or from interviews, we should be more cautious; if nobody besides Bostrom has found a point interesting enough to talk about, then it may be that our readers won't find it interesting or insightful either. Re existential risk from artificial intelligence, The Great Filter only makes one appearance in one footnote in Chapter 13, so I don't think it's a main part of Bostrom's AI arguments nowadays.
:::::Proceeding to "eliminate the potential threat that humans represent" against the AI would seem to me to be a conflict with basic human values, but you and other editors should always feel free to edit the existential risk from AI page since if you consider it unclear I'm sure the average reader is probably utterly baffled. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 04:56, 16 February 2018 (UTC)
::::::I disagree, on ''his'' page if Bostrom has said something in a non-self-published book, we most certainly do ''not'' need reliable secondary sources that have cited his book as a ref; Bostrom's book is sufficient as a ref for him saying what he said in it. It has to be made very clear that it is Bostrom's opinion and that similarly or possibly more qualified people disagree with him, and maybe that needs emphasizing more. In his latest book, Daniel Dennett (who seems to have quite a bit of knowledge of the AI milieu) mentions Bostrom by name in the text (not just footnotes), and Dennett's last chapter is about AI and the final words in the book are about how it is in humanity's hands to control the development of AI, and prevent a strong AI takeover. Dennett says that strong AI/incipient superintelligence is not worth worrying about as it could not be here for at least 50 years, although that is only 40 years off of Bostrom's lowest estimate.[[User:Overagainst|Overagainst]] ([[User talk:Overagainst|talk]]) 23:14, 16 February 2018 (UTC)
:::::::The problem is we have a dense 350 page book about superintelligence, so if an argument inside the book (a) only occupies one footnote, and (b) that specific argument is ignored by the rest of the world, it probably isn't going to be interesting to our readers either. Otherwise we would end up with hundreds of pages. It's better when Wikipedia articles are based mainly on reliable secondary sources. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 20:29, 17 February 2018 (UTC)
::::::Not unclear to an educated person, but the ERfAI page is maybe a little dry, especially the lede.
:::::::You mean ''[[From Bacteria to Bach and Back]]'', yes? [[User:Martinevans123|Martinevans123]] ([[User talk:Martinevans123|talk]]) 23:55, 16 February 2018 (UTC)
::::::::[[User:Martinevans123|Martinevans123]], Yes, ''Bacteria to Bach''. His earlier books used the example of weak AI to teach Darwinism. Dennett admits he is now more "tentative" about strong AI being unfeasible in the foreseeable future, but he still thinks it would cost too much and not give us anything we need. In ''Bacteria to Bach'' Dennett is equivocating on what Bostrom is ultimately worried about. Dennett only scoffs at the prospect of super-AI as overlords, while Bostrom is very definitely predicting super-AI as the possible exterminator of humanity. Moreover, Dennett thinks the film ''Ex Machina'' is about Turing-test-type moral problems as in the earlier ''Her'', but take away the female form and gamine appeal of the AI in ''Ex Machina'', and it seems much more about a manipulative AI breakout and treacherous turn. Again, Dennett ends the book with a para about how if the future follows the trajectory of the past, AI will never be independent of human control.[[User:Overagainst|Overagainst]] ([[User talk:Overagainst|talk]]) 20:20, 18 February 2018 (UTC)
::::::::[[User:Rolf h nelson|Rolf H Nelson]], It is not a footnote, it is several pages, not in ''Superintelligence'' but in another book (''Global Catastrophic Risks'', Nick Bostrom, Milan M. Cirkovic, 2011) that has several pages on the Fermi paradox. So I am afraid it '''is''' one of his main arguments as far as he is concerned. And the Fermi paradox is also mentioned in [https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom Khatchadourian]'s New Yorker article, which Dennett cites in his latest book as an example of "alarming" predictions about the future course of AI. Bostrom's Fermi paradox/Great Filter idea has had enough notice taken of it to be included here as one of his arguments, I think. It has to be made clear that it is just his opinion of course.[[User:Overagainst|Overagainst]] ([[User talk:Overagainst|talk]]) 20:42, 18 February 2018 (UTC)
:::::::::You said that the article on xRisk from AI doesn't mention the Great Filter/Fermi paradox, but the Fermi paradox is a minor xrisk argument for things like gray goo, not for most AI takeover scenarios, since the paperclip maker would carry on colonizing the universe. Bostrom brings it up in the context of xrisk as a whole (or in the context of anthropic reasoning), rather than specifically superintelligence xrisk, and so does the article. Feel free to expand the mention under existential risk and/or mention it in the lead if you want to, but I don't think it belongs under superintelligence.[[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 04:50, 22 February 2018 (UTC)
:::Whatever the article used to look like is kind of irrelevant. I never did core writing for this whole article, I just contributed to it. And if the ERfAI article is missing something then we should just go improve that, not try to patch up its omissions in a different article. [[User: Kbog|'''<font color="black">K</font>''']].[[User talk:Kbog|'''<font color="blue">Bog</font>''']] 06:10, 26 February 2018 (UTC)
== External links modified (February 2018) ==
Hello fellow Wikipedians,
I have just modified one external link on [[Nick Bostrom]]. Please take a moment to review [[special:diff/826338798|my edit]]. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit [[User:Cyberpower678/FaQs#InternetArchiveBot|this simple FaQ]] for additional information. I made the following changes:
*Added archive https://web.archive.org/web/20141021111122/http://www.foreignpolicy.com/articles/2009/11/30/the_fp_top_100_global_thinkers?page=0,30 to https://foreignpolicy.com/articles/2009/11/30/the_fp_top_100_global_thinkers?page=0,30
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
{{sourcecheck|checked=false|needhelp=}}
Cheers.—[[User:InternetArchiveBot|'''<span style="color:darkgrey;font-family:monospace">InternetArchiveBot</span>''']] <span style="color:green;font-family:Rockwell">([[User talk:InternetArchiveBot|Report bug]])</span> 15:55, 18 February 2018 (UTC)
== Citation for analytic philosophy ==
{{ping|Kbog}} There are plenty of primary sources associating Bostrom with the analytic tradition. For example, his [https://nickbostrom.com homepage] and [https://nickbostrom.com/cv.pdf CV] mention it, and his old homepage was even [http://www.analytic.org analytic.org: "Nick Bostrom's thinking in analytic philosophy"]! As for secondary sources: [https://books.google.com/books?hl=en&lr=&id=PejV2ViHcXIC&oi=fnd&pg=PA4&dq=nick+bostrom+analytic+philosopher&ots=TyW3fNOjdC&sig=kEqVvvuK13CQW_VlEf3m51uClew#v=onepage&q=analytic&f=false], [https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom], [https://books.google.com/books?id=cLK8CgAAQBAJ&pg=PT59&lpg=PT59&dq=nick+bostrom+analytic+philosopher&source=bl&ots=NnGEdsCfED&sig=1NjZl2zv75U8nCGJzhu3jX8A70A&hl=en&sa=X&ved=0ahUKEwiP5avwydHZAhUQ3GMKHQUKC784ChDoAQhGMAc#v=onepage&q=analytic&f=false]. [[User:GojiBarry|GojiBarry]] ([[User talk:GojiBarry|talk]]) 02:06, 4 March 2018 (UTC)
== Name ==
Should the article be moved to [[Nick Boström]]? Nick is Swedish (unless he has changed citizenship lately) and that ''is'' his correct name. On the other hand, he himself uses "Bostrom" in the English-speaking world. —[[User:Naddy|Naddy]] 01:42, 21 Mar 2005 (UTC)
:FWIW, I've never seen it as anything but "Bostrom" (reading only English material), and since it is the spelling he himself uses in English leaving it here seems sensible on the English Wikipedia. I've created a redirect, though, and mentioned the original spelling, among other things. --[[User:Mindspillage|Mindspillage]] [[User talk:Mindspillage|(spill yours?)]] 02:39, 21 Mar 2005 (UTC)
::"Nick Bostrom (born Niklas Boström in 1973)" It sounds to me as he changed his surname from Boström to Bostrom. Is this the case? If not, that line should be rewritten. [[User:Ran4|Ran4]] 15:52, 24 July 2007 (UTC)
A quick Google showing only Swedish pages would indicate that he's called Nick Boström, but usually calls himself Bostrom. I might also note that Bostrom is pronounced entirely differently than Boström. [[User:Tubba Blubba|Tubba Blubba]] ([[User talk:Tubba Blubba|talk]]) 03:57, 23 June 2008 (UTC)
== Stand-up comedian ==
A pdf available here - http://www.spectrum.ieee.org/jun08/6272 - says that Nick had a short career as a stand-up comedian, which seems an unusual choice for a [[rapture]]-style transhumanist and a Swede. Can the comedian information be confirmed? If so, it must be put into the article. [[User:Strangerstome|Strangerstome]] ([[User talk:Strangerstome|talk]]) 07:30, 8 June 2008 (UTC)
:So, why is the fact he is a Swede contradicting the fact he used to be a stand-up comedian? I remember him from some Swedish stand-up comedy TV show in the early 90's where they occasionally showcased upcoming young talents. It seems like he continued his stand-up comedy career in the UK when he moved there - just take a look at his homepage where he mentions it. // '''Jens Persson''' ([[Special:Contributions/193.10.251.119|193.10.251.119]] ([[User talk:193.10.251.119|talk]]) 10:10, 30 June 2008 (UTC))
::What was his act? Was it funny? It just seems unusual, since Swedes are not known for their humor, and science and humor are not connected. This perhaps makes it notable. [[User:Strangerstome|Strangerstome]] ([[User talk:Strangerstome|talk]]) 23:08, 3 July 2008 (UTC)
:::A possible explanation for why Swedes might not be known for their humor is that most of the material is in Swedish, so it usually doesn't spread to other countries. And while science and humor might not be directly connected, that clearly doesn't exclude the possibility. But I agree that it seems odd for someone now focusing their effort on reducing existential risk. [[User:Erik.Bjareholt|Erik.Bjareholt]] ([[User talk:Erik.Bjareholt|talk]]) 18:58, 6 November 2014 (UTC)
== potential resource ==
[http://www.businessweek.com/magazine/guardians-of-the-apocalypse-12152011.html Guardians of the Apocalypse; The tech-nerd legion bent on saving humanity from asteroids, contagions, and robot revolutions] December 15, 2011, 4:30 PM EST by Ashlee Vance in [[BusinessWeek]], excerpt "Professor Nick Bostrom ranks various threats to mankind (Illustrations by QuickHoney)"
[[Special:Contributions/99.19.46.105|99.19.46.105]] ([[User talk:99.19.46.105|talk]]) 11:10, 28 December 2011 (UTC)
== Simulation Argument/Hypothesis ==
Firstly, this section should be named Simulation Argument, as the simulation hypothesis refers only to the concept of the world as simulation, an idea not original to Bostrom. The important contribution by Bostrom is the argument resulting in his trilemma. Secondly, the statement "Because H will be such a large value, at least one of the three proximations will be true" is incorrect, as should be obvious to anyone glancing at the formula given above. H cancels from the formula and is thus irrelevant, permitting the statement of the trilemma. Also, the three propositions are not completely correct: they are stated in terms of absolutes, something the argument itself avoids because it cannot make such strong statements; it deals in averages, not absolutes. [[User:Randomnonsense|Randomnonsense]] ([[User talk:Randomnonsense|talk]]) 22:42, 29 January 2012 (UTC)
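:For reference, the cancellation claimed above can be seen directly from the formula in Bostrom's 2003 paper (my reading of the paper's notation: <math>f_P</math> is the fraction of human-level civilizations that reach a posthuman stage, <math>\bar{N}</math> the average number of ancestor-simulations run by such civilizations, and <math>H</math> the average number of individuals that have lived in a civilization before it reaches that stage):
:<math>f_{sim} = \frac{f_P \, \bar{N} \, H}{f_P \, \bar{N} \, H + H} = \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}</math>
:so the value of <math>H</math> drops out entirely, and only the product <math>f_P \bar{N}</math> matters for the trilemma.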
Hah! Just noticed someone else edited the article to express their bafflement at "Because H will be such a large value, at least one of the three proximations will be true". Reading the description of the argument carefully I feel it would benefit from being entirely rewritten in a more compact and faithful way (to the original paper). For instance, why are the propositions not given as Bostrom himself gives them in his paper? The same for the definitions of fp, N, H and fsim. The commentary on the history of the hypothesis seems irrelevant when the article could simply point to the [[Simulation hypothesis]] article instead, it is after all an article on Bostrom not the history of the simulation hypothesis. It would also be nice to briefly mention the basis of his argument for empirical reasons to believe in the simulation hypothesis i.e. information content of human sensory perception and technological projections. Any update should also mention his recent paper on a bug in the argument, [http://www.simulation-argument.com/patch.pdf "A Patch for the Simulation Argument"]. The mention of the Strong Self-Sampling Assumption is also odd, considering that the argument explicitly doesn't utilize that assumption, as is evident from the formula used to derive the trilemma (H is the number of people not the number of observer moments). [[User:Randomnonsense|Randomnonsense]] ([[User talk:Randomnonsense|talk]]) 00:36, 30 January 2012 (UTC)
== Delete eventual fate? ==
I have some issues with the "eventual fate" section: first, it seems rather odd to have claims about future biographical information in an article. Second, and more seriously, the information is slightly incorrect - there were some errors in the Sunday Times article that triggered the information cascade the Oxford Today article is part of. Nick has actually *not* confirmed that he is signed up. Of course, by now there will be plenty of articles making the claim based on the original article, so it will look like a confirmed fact when it isn't. I suggest that we remove the eventual fate section, but I do have some misgivings that the claim will reappear. [[User:Anders Sandberg|Anders Sandberg]] ([[User talk:Anders Sandberg|talk]]) 07:14, 12 July 2013 (UTC)
:I had a look at this comment by Anders and looked at the original source and agreed that this section is problematic. First, aside from Anders' complaints, the title "Eventual Fate" is strange and nonstandard for biographies, and seems suspiciously like a tongue-in-cheek joke, given that Bostrom studies the future and thereby the "fate" of humanity. Second, the sentence itself is worded weirdly - people don't normally state that they have "agreed to pay" for a service. Third, this seems like a bit of strange hearsay that is inessential to providing important information about the person in question. Fourth, there seem to be questions about the reliability of the source itself, and I don't see any other references here supporting it - but even if there are, those sources may track back to the same unreliable foundations. I don't see why this deserves its own section, and even if the statement about cryonics is retained with that reference it should be moved to another section (with the 'Eventual Fate' section removed) and reworded.[[User:LanceSBush|LanceSBush]] ([[User talk:LanceSBush|talk]]) 14:20, 1 August 2013 (UTC)
::I agree with Sandberg and Bush. The section is inappropriate, poorly worded and factually incorrect. I am removing it. [[User:Sir Paul|Sir Paul]] ([[User talk:Sir Paul|talk]]) 14:34, 1 August 2013 (UTC)
==Television==
The television section feels a bit odd; Nick is on TV fairly often as a public intellectual - those examples are just a scattered handful. [[User:Anders Sandberg|Anders Sandberg]] ([[User talk:Anders Sandberg|talk]]) 17:40, 18 October 2014 (UTC)
== External links modified ==
Hello fellow Wikipedians,
I have just added archive links to {{plural:2|one external link|2 external links}} on [[Nick Bostrom]]. Please take a moment to review [https://en.wikipedia.org/w/index.php?diff=prev&oldid=686402427 my edit]. If necessary, add {{tlx|cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{tlx|nobots|deny{{=}}InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
*Added archive https://web.archive.org/20091220070749/http://gannonaward.org:80/The_Gannon_Award/The_Gannon_Group.html to http://gannonaward.org/The_Gannon_Award/The_Gannon_Group.html
*Added archive https://web.archive.org/20091101061716/http://www.fhi.ox.ac.uk:80/archive/2009/eugene_r._gannon_award_for_the_continued_pursuit_of_human_advancement to http://www.fhi.ox.ac.uk/archive/2009/eugene_r._gannon_award_for_the_continued_pursuit_of_human_advancement
When you have finished reviewing my changes, please set the ''checked'' parameter below to '''true''' to let others know.
{{sourcecheck|checked=true}}
Cheers. —[[User:Cyberbot II|<sup style="color:green;font-family:Courier">cyberbot II</sup>]]<small><sub style="margin-left:-14.9ex;color:green;font-family:Comic Sans MS">[[User talk:Cyberbot II|<span style="color:green">Talk to my owner</span>]]:Online</sub></small> 22:58, 18 October 2015 (UTC)
== Nick Bostrom not a futurologist?! ==
Recently [[User:Apollo The Logician]] [https://en.wikipedia.org/w/index.php?title=Nick_Bostrom&type=revision&diff=754117886&oldid=754116934 removed] [[:Category:Futurologists]] from the article, saying "Not a futurologist".
I contest this removal in that I'm relatively sure that Nick Bostrom can, should and is considered a futurologist. See the definition at [[futurologist]]: "futurists or futurologists are scientists and social scientists whose specialty is futurology '''''or the attempt to systematically explore predictions and possibilities about the future and how they can emerge from the present''''', whether that of human society in particular or of life on Earth in general."
That's exactly what Bostrom is doing in most of his studies.
Maybe "futurologist" has a bad connotation for some users here? It doesn't have to and that's no reason to not add it.
He's also been called a futurologist by multiple sources: [http://www.newyorker.com/books/joshua-rothman/what-are-the-odds-we-are-living-in-a-computer-simulation], [https://books.google.de/books?id=ken5CQAAQBAJ&pg=PA145], [https://books.google.de/books?id=cydGlD0edysC&pg=PT5], [http://www.concatenation.org/nfrev/bostrom_superintelligence.html], [https://books.google.de/books?id=JMyClio8BmUC&pg=PA400], [https://seanduke.com/2014/06/04/whats-it-all-about-on-rte-radio-1-life-death-beyond-episode-3/], [http://cecs.louisville.edu/ry/LeakproofingtheSingularity.pdf], [...]
Also of relevance here: [[List of futurologists]].
--[[:User:Fixuture|'''F'''ix'''uture''']] ([[:User talk:Fixuture|talk]]) 13:34, 11 December 2016 (UTC)
:Fair enough [[User:Apollo The Logician|Apollo The Logician]] ([[User talk:Apollo The Logician|talk]]) 13:36, 11 December 2016 (UTC)
::It's an irrelevant point because he's categorised (2x) in [[:Category:Transhumanists]], a subcategory of [[:Category:Futurologists]]. Per [[WP:SUBCAT]], the latter category ought to be removed again. -- [[User:Michael Bednarek|Michael Bednarek]] ([[User talk:Michael Bednarek|talk]]) 01:43, 12 December 2016 (UTC)
:I think "futurologist" applies to people who try to extrapolate future trends in a lot of different domains. Otherwise a typical stock-market analyst is a futurologist too. The first three [[WP:RS]] on a Google news search for "Nick Bostrom" gave me [http://www.newsweek.com/nick-bostrom-google-winning-artificial-intelligence-arms-race-red-button-506624][https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine][http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom], none of which describe him as a futurologist. So I don't think Bostrom fits under futurology per [[WP:NONDEF]], as it's not a ''consistently-used'' description. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 06:05, 16 December 2016 (UTC)
:"Futurologist" does have a moderately bad connotation in the scientific community! This probably means the description should be used sparingly, not just from Bostrom but for everybody, per [[WP:LABEL]]. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 06:03, 16 December 2016 (UTC)
::{{ping|Rolf h nelson}} Per [[User:Michael Bednarek]] I think it's irrelevant, anyway: your Google search doesn't look like an honest attempt to check whether he's been called a futurologist. The top 3 Google search results ''not'' calling him so is a sincerely ridiculous argument I never heard here until now. I already listed 7 sources calling him so and there are more - I think that should be enough.
::And I already supposed that the term has a bad connotation in the scientific community, and hence I addressed that earlier as well! To expand on the "it doesn't have to": if people like Bostrom aren't called futurologists despite being so, it's no wonder the term keeps its bad connotation: people like him should be showcases of how future studies ''can'' be approached in a serious and useful manner.
::--[[:User:Fixuture|'''F'''ix'''uture''']] ([[:User talk:Fixuture|talk]]) 20:46, 21 December 2016 (UTC)
:::Thanks Fixuture, perhaps I'm not communicating my point well, so let me try to clarify. I agree 100% that Bostrom ''has been called'' a futurologist in occasional [[WP:RS]]. Stephen Hawking ''has been called'' a visionary, and a [http://www.telegraph.co.uk/news/uknews/3248858/Stephen-Hawking-to-retire-as-Cambridges-Professor-of-Mathematics.html mathematician]. We don't put Stephen Hawking in a category called 'visionaries' nor 'mathematician', per [[WP:NONDEF]], because he's not ''consistently described'' as either of those. The fact that 'visionary' additionally is a [[WP:LABEL]] (in this case, a positive one) is an additional reason not to put him in such a category, or perhaps to not even have such a category. My personal opinion, which could be wrong, is that there isn't enough consistent description of him as a futurologist to merit adding him to the category, which I consider to be a fuzzy category. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 02:01, 22 December 2016 (UTC)
:::"I addressed that earlier as well!" Yes, I saw, but the thing I would've liked you to expand on is the more relevant "that's no reason to not add it" and not the "it doesn't have to" part. :-) That said, I don't feel strongly about it; if you feel strongly, then unless other editors speak up and state they think [[WP:NONDEF]] a significant issue here, I'm conceding to the change. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 02:01, 22 December 2016 (UTC)
== Sourcing issues and OR ==
This article seems to contain quite a bit of OR and self-sourcing. Please cite from RS before reverting. Thanks. [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 19:06, 13 March 2017 (UTC)
:Primary sources are considered reliable sources.[[User:Apollo The Logician|Apollo The Logician]] ([[User talk:Apollo The Logician|talk]]) 19:26, 13 March 2017 (UTC)
:That is incorrect. See [[WP:PSTS]]. To help you, I am setting out the relevant text '' All analyses and interpretive or synthetic claims about primary sources must be referenced to a secondary source, and must not be an original analysis of the primary-source material by Wikipedia editors.''. Please provide reliable secondary sources. Since this is about a BLP I am obliged by policy to immediately remove the controversial text to protect the encyclopedia and the trust of our readers. [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 19:40, 13 March 2017 (UTC)
::The key words there are interpretative and synthetic. Simply summarising what he says in different words is not those things. [[User:Apollo The Logician|Apollo The Logician]] ([[User talk:Apollo The Logician|talk]]) 19:43, 13 March 2017 (UTC)
:::Please provide a reliable secondary source for these alleged summaries in ''different words''. You cannot force an editor to read the book/paper to verify these summaries. If the work/BLP is so notable, surely there would be secondary sources, and lots of them? [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 19:52, 13 March 2017 (UTC)
:::: FYI, [[WP:3RRBLP]] ''Removing violations of the biographies of living persons (BLP) policy that contain libelous, biased, unsourced, or '''poorly sourced contentious material'''. What counts as exempt under BLP can be controversial. Consider reporting to the BLP noticeboard instead of relying on this exemption.'' [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 19:52, 13 March 2017 (UTC)
:Inlinetext is a long term disruptive user. He wasted a huge amount of time at [[Geodesics on an ellipsoid]] falsely claiming copyright violations and "original research". He also deleted more than half of [[Stanton Foundation]] and [[Parker Conrad]] without a good reason. I am reverting back to the edit before Inlinetext's first edit. [[User:Jrheller1|Jrheller1]] ([[User talk:Jrheller1|talk]]) 19:56, 13 March 2017 (UTC)
::{{ec}} I am applying Wikipedia policy to this article; you are edit warring by reverting me on this BLP. Now please quickly find reliable secondary (preferably scholarly, showing wide peer acceptance) sources for all this OR/SYNTH and SPS which I have clearly identified, instead of raising red herrings and HARASSING / STALKING me. You are of course aware that this is exactly the sort of dubious, poorly sourced EA puffery which is littering Wikipedia, inserted by paid EA advocacy editors like Vipul, Issarice? [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 20:04, 13 March 2017 (UTC)
:::Since there is no response or citation of authentic secondary sources, I shall proceed to delete those poorly sourced controversial portions in this BLP I had previously excised. I am doing this under the BLP policy to safeguard the integrity of this article from Original Research, synthesis, and cite-fraud. [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 03:39, 15 March 2017 (UTC)
{{od}}{{re|Apollo The Logician}} re: '' BLP and OR doesnt apply to primary sources'' and ''Primary sources are considered reliable sources'', could you point me to the 'basis' on this, or set it out here ? [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 18:30, 15 March 2017 (UTC)
:Not "pseudo-science" ? You may want to see [http://www.math.columbia.edu/~woit/wordpress/?p=583 "far beyond the edge of absurd"]. [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 18:30, 15 March 2017 (UTC)
:I am not exactly sure what you mean by that, but read [[WP:PRIMARY]] and [[WP:RS]]. Primary sources are considered reliable, though it is better to use secondary sources, etc. [[User:Apollo The Logician|Apollo The Logician]] ([[User talk:Apollo The Logician|talk]]) 20:06, 15 March 2017 (UTC)
::I suggest that you have misread a stray phrase from inapplicable sections of those policies. For BLPs the sourcing policy is [[WP:BLPSOURCE]], eg [[WP:BLPREMOVE]]. [[User:Inlinetext|Inlinetext]] ([[User talk:Inlinetext|talk]]) 03:15, 16 March 2017 (UTC)
:::For this topic there are a large number of reputably published primary sources as well as primary sources which have been extensively discussed by secondary sources and can therefore be used for augmentation. The book ''Superintelligence'', for instance, has received multiple reviews not only in the press but also in academic journals. If material is lacking proper sourcing then flag it and suitable sourcing material will probably be found. [[User: Kbog|'''<font color="black">K</font>''']].[[User talk:Kbog|'''<font color="blue">Bog</font>''']] 11:18, 17 March 2017 (UTC)
* The removals have been very much undue, especially since they were done without any attempt to build consensus on the talk page or find better sourcing. I have restored most of the removed material with the benefit of secondary sources, since that seems to be the latest pet peeve of deletionists. In the future, you can just write "this needs secondary sourcing" or "this primary source is not being used appropriately", and I can find such a source and we'll all be able to go home happy, or alternatively we can verify that the sources are in fact being used appropriately as per [[WP:PRIMARY]]. In my mind, removing improperly sourced material without first verifying that there is no viable source available is grounds for a speedy revert. FWIW, I have read many of these primary sources. [[User: Kbog|'''<font color="black">K</font>''']].[[User talk:Kbog|'''<font color="blue">Bog</font>''']] 10:26, 17 March 2017 (UTC)
== Lede emphasising he is known for the AI argument ==
The lede is cluttered with info that does not belong. The lede should emphasise the reason that Bostrom has his own page, and be intriguing enough to make readers want to read through the main body of the article. That reason is his AI concerns, and his main arguments from the AI book should be touched on. Critical responses should also be mentioned. Refs should not be in the lede anyway. [[User:Overagainst|Overagainst]] ([[User talk:Overagainst|talk]]) 15:36, 28 October 2017 (UTC)
:I disagree. The previous lead paragraph gave a more comprehensive overview of Bostrom's work, in line with [[MOS:LEAD]]. Your version focusses only on AI. On a formal level, the lead now links the term [[superintelligence]] three times – see [[WP:REPEATLINK]] – but omits a link to [[Instrumental convergence]], where [[paperclip maximizer]] is explained. <!-- Template:Unsigned --><small class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Michael Bednarek|Michael Bednarek]] ([[User talk:Michael Bednarek#top|talk]] • [[Special:Contributions/Michael Bednarek|contribs]]) 05:12, 29 October 2017 (UTC)</small>
::I agree with Michael. [[User:GojiBarry|GojiBarry]] ([[User talk:GojiBarry|talk]]) 01:54, 5 February 2018 (UTC)
::{{ping|Fixuture|Rolf h nelson|Kbog}} Do you have any thoughts on this change in the lede? [[User:GojiBarry|GojiBarry]] ([[User talk:GojiBarry|talk]]) 01:25, 11 February 2018 (UTC)
:::Most Wikipedia BLP leads are bland CVs because that's the easiest way to do things, but MOS:LEAD doesn't really support that; rather MOS:LEAD states "Like in the body of the article itself, the emphasis given to material in the lead should roughly reflect its importance to the topic, according to reliable, published sources", so whether to even mention the anthropic principle, the reversal test, and consequentialism in the lede is a judgement call which I don't have a strong opinion about. Most of the MSM media coverage appears to be about superintelligence, followed moderately distantly by the simulation argument. Looking at [https://scholar.google.com/citations?user=oQwpz3QAAAAJ&hl=en], human enhancement ethics should definitely be mentioned in the lede. I don't know what "touching on" the main arguments would entail, but it would probably be difficult to be concise, non-technical, and NPOV in the lede; this isn't a large article and therefore wouldn't need a four-paragraph lede. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 02:03, 12 February 2018 (UTC)
:::I liked the old intro better and from a pure style point of view, if you elaborate on his AI theories, then it should be at the end of the second paragraph rather than right up front. The way it's written now is kind of weird and jarring, and it detracts from his other notable work. Definitely don't exclude mention of the other topics besides AI, sure pop media doesn't talk about them as much but they are still popular and have decent academic recognition. Remember one of the main purposes of an introduction is to tell the reader why the subject is notable; talking about what he believes doesn't tell us why he is a notable person, and talking about what he believes right in the beginning of the introduction forces the reader to go further in order to find information about why he is notable. I don't have a problem with elaborating on AI if it's at the end of the introduction. [[User: Kbog|'''<font color="black">K</font>''']].[[User talk:Kbog|'''<font color="blue">Bog</font>''']] 05:20, 13 February 2018 (UTC)
:::Also, I don't see any reason to mention critical responses to his theories in the lede - that's not important for telling the reader who he is, why he's notable, etc. As far as I can tell we don't normally do that, even for researchers with theories as controversial as, say, [[Karl Marx]]. Refs in the intro are also fine, I don't see any problem with them. [[User: Kbog|'''<font color="black">K</font>''']].[[User talk:Kbog|'''<font color="blue">Bog</font>''']] 05:27, 13 February 2018 (UTC)
:::To be clear: this version of the intro was good; people can add one or two sentences at the end going into detail about superintelligence if they want. [https://en.wikipedia.org/w/index.php?title=Nick_Bostrom&oldid=818453835] [[User: Kbog|'''<font color="black">K</font>''']].[[User talk:Kbog|'''<font color="blue">Bog</font>''']] 05:35, 13 February 2018 (UTC)
== External links modified ==
Hello fellow Wikipedians,
I have just modified one external link on [[Nick Bostrom]]. Please take a moment to review [[special:diff/814804925|my edit]]. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit [[User:Cyberpower678/FaQs#InternetArchiveBot|this simple FaQ]] for additional information. I made the following changes:
*Added {{tlx|dead link}} tag to http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0019/5923/How_Unlikely_is_a_Doomsday_Catastrophe_plus_Supplementary_Materials.pdf
*Added archive https://web.archive.org/web/20120127111542/http://www.anthropic-principle.com/book/anthropicbias.pdf to http://www.anthropic-principle.com/book/anthropicbias.pdf
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
{{sourcecheck|checked=false|needhelp=}}
Cheers.—[[User:InternetArchiveBot|'''<span style="color:darkgrey;font-family:monospace">InternetArchiveBot</span>''']] <span style="color:green;font-family:Rockwell">([[User talk:InternetArchiveBot|Report bug]])</span> 01:11, 11 December 2017 (UTC)
==Musk==
Musk merely said Bostrom's book ''Superintelligence'' is "worth reading". Musk has warned about one country's advanced, strategically programmed computer starting a nuclear World War Three in a pre-emptive attempt to prevent the state whose defences it is part of from being defeated, but he has never mentioned the possibility of AI deciding that its own very particular interests are best served by killing off ''all'' humanity, which is the main concern that Bostrom is raising. [[Eliezer Yudkowsky]] sketched a scenario for how an artificial intelligence could fulfil the worst fears of Bostrom.
Musk's thinking is not similar to Bostrom's at all. And he is spreading open-source AI all over the globe, which does not sound like someone worried about a scenario in which AI bootstraps itself into an undercover superintelligence with only one logical move left: a three-act play in which humans disappear during the second act. [[User:Overagainst|Overagainst]] ([[User talk:Overagainst|talk]]) 20:24, 12 February 2018 (UTC)
:You have a good point that the sources provided, besides being weak, are indeed ambiguous about what kind of existential risk Musk believes AI poses. I added [https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x vanity fair], which states "Musk, Stephen Hawking, and Bill Gates are all raising the same warning about A.I." and quotes Musk in more detail to make it clearer that Musk does claim to be strongly motivated by concern about superintelligence deciding to kill off humanity: “Let’s say you create a self-improving A.I. to pick strawberries,” Musk said, “and it gets better and better at picking strawberries and picks more and more and it is self-improving, so all it really wants to do is pick strawberries. So then it would have all the world be strawberry fields. Strawberry fields forever.” [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 05:26, 13 February 2018 (UTC)
== Problems with body ==
I see problems with the body.
1 - too much writing on superintelligence. It's fine to give it much more weight than the other topics, in accordance with its greater notability. But right now it is just a thick summary of his book with some disparate excerpts and ideas. Also, there is too much attention given to certain aspects of the book, like way too much info on the illustrative scenario, which is only a small part of his book. The article needs to zoom out and look at his research/career as a whole, in context of other researchers and ideas. Right now it is too zoomed in, extracting lots of details from his writing.
2 - organization is off, lots of things are listed under "Superintelligence" even though they are different topics. The little "Philosophy" section only has x-risk even though x-risk is not really philosophy at all.
3 - I don't like some of the writing; I think it could do with some copyediting. It should have less jargon, fewer technical concepts, fewer examples and quotes from the book, more generality, and more clarification of exactly what makes his ideas different and notable compared to other ideas.
I thought I had watchlisted this page; maybe I was just busy when all these edits were made. I see [[User:Overagainst]] contributed a lot here and I was not paying attention before it was all completed. Well, I intend to change it up, but I'm posting here first in case anyone has anything to say. [[User: Kbog|'''<font color="black">K</font>''']].[[User talk:Kbog|'''<font color="blue">Bog</font>''']] 05:50, 13 February 2018 (UTC)
::If you want to expand the philosophy section and have it solely on Bostrom's analytic philosophy background and ideas, I see no problem. However, Bostrom is notable for his ''Superintelligence'' book and existential risk institute. Any article on him that did not give that a lot of weight and explanation would be badly flawed, and I don't think it should go back to being like that. Here is what you had:
::[https://en.wikipedia.org/w/index.php?title=Nick_Bostrom&oldid=770756593]'''In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that with "the creation of a superintelligent being represents a possible means to the extinction of mankind", and "there are actions that can be taken to reduce this risk," such as "the creation of a ‘friendly’ superintelligent being."[20] In January 2015, Bostrom joined Stephen Hawking among others in signing the Future of Life Institute's open letter warning of the potential dangers of AI.[21] The signatories "...believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today."[22]'''
::You left it completely vague as to why Bostrom thinks there is a risk and the massive difficulties he sees in creating such a being. And you did not tell the reader that Bostrom's book notes that any number of possible aims an AI could have might converge on the single instrumental one of exterminating humanity. The takeover scenario is a distillation of Bostrom's book, which repeatedly raises the specter of an AI defeating various attempts to hard-wire it to be friendly. Throughout the book Bostrom is pointing out flaws in Yudkowsky's ideas on control. Those are the technical parts of the book that I did not think I could do justice to. Perhaps you can. However, the takeover scenario is Bostrom's most notable argument, and without it you are misleading readers about what he thinks the risk is. I suggest you write a philosophy section (as long as you like) and put it here for discussion. Same with the Superintelligence section. [[User:Overagainst|Overagainst]] ([[User talk:Overagainst|talk]]) 20:15, 13 February 2018 (UTC)
:::[[WP:SUMMARY]] doesn't prohibit having a large summary section that duplicates the content of [[Existential risk from artificial general intelligence]], but IMHO, given finite editor resources and the presumably low reader traffic to this page, the simplest thing would be to have the summary go back to being just a paragraph, as the reader can link through to the main article if they want to learn more. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 06:17, 14 February 2018 (UTC)
::::You have done a pretty good job at [[Existential risk from artificial general intelligence]], but the article lacks some things this one has, and you seem to want it to be the go-to page for Bostrom. Yet there is no link to Bostrom in the lede. Now you want to chop this article down. But this one has things that [[Existential risk from artificial general intelligence]] lacks. You came here and cut out mention of a point that Bostrom has repeatedly made in interviews, namely the Fermi paradox. Something he has also done in a book, ''Global Catastrophic Risks'' (Nick Bostrom, Milan M. Cirkovic, 2011), which has several pages on the Fermi paradox. So I am afraid it '''is''' one of his main arguments as far as he is concerned. Another thing Bostrom mentions in his book, which I put here but which isn't on your edit of the main AI risk page, is John von Neumann and Bertrand Russell advocating during the 1940s that the US threaten or use its nuke monopoly to eliminate any possible future nuclear threat from the USSR. Kindness or malevolence doesn't really come into it, and in my opinion the basic argument is not "If superintelligent AI is possible, and if it is possible for a superintelligence's goals to conflict with basic human values, then AI poses a risk of human extinction." at all. Obviously John von Neumann and Bertrand Russell did not oppose human values; they just took what they saw as a realistic view of the situation and tried to eliminate a threat to the entity they were part of. Bostrom's book seems to be making the point that a super-AI acting absolutely rationally might well be led by logic to try to eliminate the potential threat that humans represent, and it will be able to surreptitiously manufacture nanotech weapons to do it. In contrast, your AI risk article is not explicit about the threat of a strategising super-AI deliberately exterminating humanity at all, and you don't say anything about the means.
So IMO the Bostrom page has things the supposed main article on the subject lacks, despite its very great length. [[User:Overagainst|Overagainst]] ([[User talk:Overagainst|talk]]) 15:45, 14 February 2018 (UTC)
:::::Bostrom's elaboration of Robin Hanson's [[Great Filter]] can certainly go in this article if it's well-sourced from reliable secondary sources, as can the nuke monopoly point. If it's only from primary sources about Bostrom's beliefs such as the book or from interviews, we should be more cautious; if nobody besides Bostrom has found a point interesting enough to talk about, then it may be that our readers won't find it interesting or insightful either. Re existential risk from artificial intelligence, The Great Filter only makes one appearance in one footnote in Chapter 13, so I don't think it's a main part of Bostrom's AI arguments nowadays.
:::::Proceeding to "eliminate the potential threat that humans represent" against the AI would seem to me be a conflict with basic human values, but you and other editors should always feel free to edit the existential risk from AI page since if you consider it unclear I'm sure the average reader is probably utterly baffled. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 04:56, 16 February 2018 (UTC)
::::::I disagree: on ''his'' page, if Bostrom has said something in a non-self-published book, we most certainly do ''not'' need reliable secondary sources that have cited his book as a ref; Bostrom's book is sufficient as a ref for him saying what he said in it. It has to be made very clear that it is Bostrom's opinion and that similarly or possibly more qualified people disagree with him, and maybe that needs emphasising more. In his latest book, Daniel Dennett (who seems to have quite a bit of knowledge of the AI milieu) mentions Bostrom by name in the text (not just footnotes); Dennett's last chapter is about AI, and the final words in the book are about how it is in humanity's hands to control the development of AI and prevent a strong-AI takeover. Dennett says that strong AI/incipient superintelligence is not worth worrying about as it could not be here for at least 50 years, although that is only 40 years off Bostrom's lowest estimate. [[User:Overagainst|Overagainst]] ([[User talk:Overagainst|talk]]) 23:14, 16 February 2018 (UTC)
:::::::The problem is we have a dense 350 page book about superintelligence, so if an argument inside the book (a) only occupies one footnote, and (b) that specific argument is ignored by the rest of the world, it probably isn't going to be interesting to our readers either. Otherwise we would end up with hundreds of pages. It's better when Wikipedia articles are based mainly on reliable secondary sources. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 20:29, 17 February 2018 (UTC)
::::::Not unclear to an educated person, but the ERfAI page is maybe a little dry, especially the lede.
:::::::You mean ''[[From Bacteria to Bach and Back]]'', yes? [[User:Martinevans123|Martinevans123]] ([[User talk:Martinevans123|talk]]) 23:55, 16 February 2018 (UTC)
::::::::[[User:Martinevans123|Martinevans123]], yes, ''Bacteria to Bach''. His earlier books used the example of weak AI to teach Darwinism. Dennett admits he is now more "tentative" about strong AI being unfeasible in the foreseeable future, but he still thinks it would cost too much and not give us anything we need. In ''Bacteria to Bach'' Dennett is equivocating on what Bostrom is ultimately worried about. Dennett only scoffs at the prospect of super-AI as overlords, while Bostrom is very definitely predicting super-AI as the possible exterminator of humanity. Moreover, Dennett thinks the film ''Ex Machina'' is about Turing-test-type moral problems as in the earlier ''Her'', but take away the female form and gamine appeal of the AI in ''Ex Machina'' and it seems much more about a manipulative AI breakout and treacherous turn. Again, Dennett ends the book with a para about how, if the future follows the trajectory of the past, AI will never be independent of human control. [[User:Overagainst|Overagainst]] ([[User talk:Overagainst|talk]]) 20:20, 18 February 2018 (UTC)
::::::::[[User:Rolf h nelson|Rolf H Nelson]], it is not a footnote; it is several pages, not in ''Superintelligence'' but in another book (''Global Catastrophic Risks'', Nick Bostrom, Milan M. Cirkovic, 2011) that has several pages on the Fermi paradox. So I am afraid it '''is''' one of his main arguments as far as he is concerned. And the Fermi paradox is also mentioned in [https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom Khatchadourian]'s New Yorker article, which Dennett cites in his latest book as an example of "alarming" predictions about the future course of AI. Bostrom's Fermi paradox/Great Filter idea has had enough notice taken of it to be included here as one of his arguments, I think. It has to be made clear that it is just his opinion, of course. [[User:Overagainst|Overagainst]] ([[User talk:Overagainst|talk]]) 20:42, 18 February 2018 (UTC)
:::::::::You said that the article on xRisk from AI doesn't mention the Great Filter/Fermi paradox, but the Fermi paradox is a minor xrisk argument for things like gray goo, not for most AI takeover scenarios, since the paperclip maker would carry on colonizing the universe. Bostrom brings it up in the context of xrisk as a whole (or in the context of anthropic reasoning), rather than specifically superintelligence xrisk, and so does the article. Feel free to expand the mention under existential risk and/or mention it in the lead if you want to, but I don't think it belongs under superintelligence. [[User:Rolf h nelson|Rolf H Nelson]] ([[User talk:Rolf h nelson|talk]]) 04:50, 22 February 2018 (UTC)
:::Whatever the article used to look like is kind of irrelevant. I never did core writing for this whole article, I just contributed to it. And if the ERfAI article is missing something then we should just go improve that, not try to patch up its omissions in a different article. [[User: Kbog|'''<font color="black">K</font>''']].[[User talk:Kbog|'''<font color="blue">Bog</font>''']] 06:10, 26 February 2018 (UTC)
== External links modified (February 2018) ==
Hello fellow Wikipedians,
I have just modified one external link on [[Nick Bostrom]]. Please take a moment to review [[special:diff/826338798|my edit]]. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit [[User:Cyberpower678/FaQs#InternetArchiveBot|this simple FaQ]] for additional information. I made the following changes:
*Added archive https://web.archive.org/web/20141021111122/http://www.foreignpolicy.com/articles/2009/11/30/the_fp_top_100_global_thinkers?page=0,30 to https://foreignpolicy.com/articles/2009/11/30/the_fp_top_100_global_thinkers?page=0,30
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
{{sourcecheck|checked=false|needhelp=}}
Cheers.—[[User:InternetArchiveBot|'''<span style="color:darkgrey;font-family:monospace">InternetArchiveBot</span>''']] <span style="color:green;font-family:Rockwell">([[User talk:InternetArchiveBot|Report bug]])</span> 15:55, 18 February 2018 (UTC)
== Citation for analytic philosophy ==
{{ping|Kbog}} There are plenty of primary sources associating Bostrom with the analytic tradition. For example, his [https://nickbostrom.com homepage] and [https://nickbostrom.com/cv.pdf CV] mention it, and his old homepage was even [http://www.analytic.org analytic.org: "Nick Bostrom's thinking in analytic philosophy"]! As for secondary sources: [https://books.google.com/books?hl=en&lr=&id=PejV2ViHcXIC&oi=fnd&pg=PA4&dq=nick+bostrom+analytic+philosopher&ots=TyW3fNOjdC&sig=kEqVvvuK13CQW_VlEf3m51uClew#v=onepage&q=analytic&f=false], [https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom], [https://books.google.com/books?id=cLK8CgAAQBAJ&pg=PT59&lpg=PT59&dq=nick+bostrom+analytic+philosopher&source=bl&ots=NnGEdsCfED&sig=1NjZl2zv75U8nCGJzhu3jX8A70A&hl=en&sa=X&ved=0ahUKEwiP5avwydHZAhUQ3GMKHQUKC784ChDoAQhGMAc#v=onepage&q=analytic&f=false]. [[User:GojiBarry|GojiBarry]] ([[User talk:GojiBarry|talk]]) 02:06, 4 March 2018 (UTC)
== "I hate those bloody niggers" ==
Hello fellow Wikipedians,
I believe it is important that this page clearly articulates the full extent of Nick Bostrom's bigoted and racist views, which he himself took the pains to highlight publicly, due to how incredibly racist and anti-Black they were. Specifically, my contribution that fully quoted Nick Bostrom that he "[hates] those bloody niggers" was deleted by Vaco98 (https://en.wikipedia.org/wiki/User:Vaco98), who appears himself to be a white male (perhaps he agrees with Nick Bostrom's views on Black persons or thinks it's OK to protect racist white males in positions of power because "he's done other good things"?), on the grounds that this edit is "not constructive." I think this demonstrates clear and biased circling of wagons around a privileged white male and is completely inappropriate for a public encyclopedia that seeks to be fair and balanced. The only appropriate measure in such an instance is to fully quote the relevant material, which is itself the only reason this e-mail is considered a notable event. The full quote, to be clear, is "I hate those bloody niggers!!!!" The four exclamation marks are Nick Bostrom's choice, not mine.
While it is clear that Nick Bostrom did not intend to plainly state that he "[hates] those bloody niggers," he clearly did not find it problematic to write out such an offensive statement, even as an example, and he paired such an offensive statement with his very clearly articulated view that Black people are "more stupid" than white people. The intended meaning and unequivocally racist sentiments in this context are clear to any decent person and should not be whitewashed to protect Nick Bostrom's reputation, which is not the job of Wikipedia or anyone else. Anybody who is interested in learning more about Nick Bostrom should be free to read and understand why this e-mail in particular was so noteworthy that Nick Bostrom himself took pains to "pre-emptively" surface it and why its contents were covered in national (if not international) news outlets. The reason is, plainly, because Nick Bostrom wrote "I hate those bloody niggers!!!!" and felt no qualms about doing so. This should not be minimized or whitewashed, it should be openly and plainly stated for everyone to know what his views on acceptable conduct were/are. ~~~~' |