Congratulations!

[Valid RSS] This is a valid RSS feed.

Recommendations

This feed is valid, but interoperability with the widest range of feed readers could be improved by implementing the following recommendations.
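The recommendation list itself is not reproduced here. As an illustrative sketch only — not the validator's actual output — two recommendations commonly made for feeds shaped like this one, which has an empty channel description and no Atom self-link, would be addressed in the source like this (the description wording is a placeholder):

```xml
<channel>
  <title>Comments - Less Wrong</title>
  <link>http://lesswrong.com/</link>
  <!-- Give the channel a non-empty description; this wording is hypothetical -->
  <description>Recent comments on Less Wrong</description>
  <!-- Add a self-referencing atom:link; this requires declaring
       xmlns:atom="http://www.w3.org/2005/Atom" on the <rss> element -->
  <atom:link href="http://lesswrong.com/comments/.rss"
             rel="self" type="application/rss+xml"/>
</channel>
```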

Source: http://lesswrong.com/comments/.rss

  1. <?xml version="1.0" encoding="UTF-8"?>
  2. <rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:media="http://search.yahoo.com/mrss/">
  3. <channel>
  4. <title>
  5. Comments - Less Wrong
  6. </title> <link>http://lesswrong.com/</link>
  7. <description></description>
  8. <item>
  9. <title>TobyBartels on Sorting Pebbles Into Correct Heaps</title>
  10. <link>http://lesswrong.com/lw/sy/sorting_pebbles_into_correct_heaps/dgh5</link>
  11. <guid isPermaLink="true">http://lesswrong.com/lw/sy/sorting_pebbles_into_correct_heaps/dgh5</guid>
  12. <dc:date>2016-10-19T11:46:28.511688+00:00</dc:date>
  13. <description>
  14. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;Sure, that explains why the story was written with this flaw, but it doesn't remove the flaw. But I don't have a better suggestion.&lt;/p&gt;&lt;/div&gt;
  15. </description>
  16. </item>
  17. <item>
  18. <title>PhilGoetz on MIRI's 2016 Fundraiser</title>
  19. <link>http://lesswrong.com/lw/ny1/miris_2016_fundraiser/dggx</link>
  20. <guid isPermaLink="true">http://lesswrong.com/lw/ny1/miris_2016_fundraiser/dggx</guid>
  21. <dc:date>2016-10-19T02:51:06.119839+00:00</dc:date>
  22. <description>
  23. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;Please use a page break when you post an article, so we can easily scroll past it and see the previous articles.&lt;/p&gt;&lt;/div&gt;
  24. </description>
  25. </item>
  26. <item>
  27. <title>DanArmak on Open thread, October 2011</title>
  28. <link>http://lesswrong.com/lw/7wu/open_thread_october_2011/dggg</link>
  29. <guid isPermaLink="true">http://lesswrong.com/lw/7wu/open_thread_october_2011/dggg</guid>
  30. <dc:date>2016-10-18T19:39:39.934825+00:00</dc:date>
  31. <description>
  32. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;Thank you, your point is well taken.&lt;/p&gt;&lt;/div&gt;
  33. </description>
  34. </item>
  35. <item>
  36. <title>TheAncientGeek on Open thread, October 2011</title>
  37. <link>http://lesswrong.com/lw/7wu/open_thread_october_2011/dgg5</link>
  38. <guid isPermaLink="true">http://lesswrong.com/lw/7wu/open_thread_october_2011/dgg5</guid>
  39. <dc:date>2016-10-18T13:04:49.990342+00:00</dc:date>
  40. <description>
  41. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;I am not taking charity to be a central example of ethics.&lt;/p&gt;
  42. &lt;p&gt;Charity, societal improvement, etc. are not &lt;em&gt;centrally&lt;/em&gt; ethical, because the dimension of obligation is missing. It is obligatory to refrain from murder, but supererogatory to give to charity. Charity is not completely divorced
  43. from ethics, because gaining better outcomes is the obvious flipside
  44. of avoiding worse outcomes, but it does not have every component of
  45. that which is centrally ethical.&lt;/p&gt;
  46. &lt;p&gt;Not all value is morally relevant. Some preferences can be satisfied without impacting anybody else, preferences for flavours of ice cream being the classic example, and these are morally irrelevant. On the other hand, my preference for loud music is likely to impinge on my neighbour's preference for a good night's sleep: those preferences have a potential for conflict.&lt;/p&gt;
  47. &lt;p&gt;Charity and altruism are part of ethics, but not central to ethics. A peaceful and prosperous society is in a position to consider how best to allocate its spare resources (and utilitarianism is helpful here, without being a full theory of ethics), but peace and prosperity are themselves the outcome of a functioning ethics, not things that can be taken for granted. Someone who treats charity as the outstanding issue in ethics is, as it were, looking at
  48. the visible 10% of the iceberg while ignoring the 90% that supports it.&lt;/p&gt;
  49. &lt;blockquote&gt;
  50. &lt;p&gt;If you mean conflict between individuals' own values,&lt;/p&gt;
  51. &lt;/blockquote&gt;
  52. &lt;p&gt;I mean &lt;em&gt;destructive&lt;/em&gt; conflict.&lt;/p&gt;
  53. &lt;p&gt;Consider two stone age tribes. When a hunter of tribe A returns with a
  54. deer, everyone falls on it, trying to grab as much as possible, and they end up fighting and killing each other. When the same thing happens in tribe B, they apportion the kill in an orderly fashion according to
  55. a predefined rule. All other things being equal, tribe B will do better
  56. than tribe A: they are in possession of a useful piece of social technology.&lt;/p&gt;&lt;/div&gt;
  57. </description>
  58. </item>
  59. <item>
  60. <title>TonyPaukar on The Landmark Forum — a rationalist's first impression</title>
  61. <link>http://lesswrong.com/lw/5zh/the_landmark_forum_a_rationalists_first_impression/dgg2</link>
  62. <guid isPermaLink="true">http://lesswrong.com/lw/5zh/the_landmark_forum_a_rationalists_first_impression/dgg2</guid>
  63. <dc:date>2016-10-18T10:20:10.308636+00:00</dc:date>
  64. <description>
  65. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;There is an introductory night wherein they give a tiny promo of the entire course. The participants and the volunteers convinced me to do the Forum and guaranteed 100% satisfaction. On the first and the second day I was unsure of the ability of this course; I assumed it was like any other program which resulted in nothing. After consulting one of the volunteers I realised that working on the assignments was essential, which I wasn't doing with integrity. After taking all of the assignments seriously I did see instant results. The Forum made a difference in my behaviour towards all situations in life. I used to over-think, due to which I lost my confidence and built stage-fear. I lacked self-expression. Now I am confident and out-going. I recommend this course to anyone who wants self-improvement.&lt;/p&gt;&lt;/div&gt;
  66. </description>
  67. </item>
  68. <item>
  69. <title>TheAncientGeek on What Is Signaling, Really?</title>
  70. <link>http://lesswrong.com/lw/did/what_is_signaling_really/dgg1</link>
  71. <guid isPermaLink="true">http://lesswrong.com/lw/did/what_is_signaling_really/dgg1</guid>
  72. <dc:date>2016-10-18T08:33:02.435438+00:00</dc:date>
  73. <description>
  74. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;The rule as usually understood is that fewer relates to discrete quantities, fewer apples, and less to continuous quantities, less milk. It's possibly rather artificial, and noticeably lacking a counterpart in &quot;more&quot;.&lt;/p&gt;&lt;/div&gt;
  75. </description>
  76. </item>
  77. <item>
  78. <title>donjoe on Terminal Values and Instrumental Values</title>
  79. <link>http://lesswrong.com/lw/l4/terminal_values_and_instrumental_values/dgfu</link>
  80. <guid isPermaLink="true">http://lesswrong.com/lw/l4/terminal_values_and_instrumental_values/dgfu</guid>
  81. <dc:date>2016-10-17T23:58:36.105180+00:00</dc:date>
  82. <description>
  83. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;I'm noticing this very late, and I'm going to be off-topic, but I still have to stop to note that there's no such thing as &quot;IP&quot;, not in actual laws (unless they've been infected by this term very recently and I just haven't found out about it). It's a bogus name lumping together things that the law does not lump together at all, a term invented purely for use in corporate propaganda, nothing more.
  84. &lt;a href=&quot;https://www.gnu.org/philosophy/not-ipr.en.html&quot; rel=&quot;nofollow&quot;&gt;https://www.gnu.org/philosophy/not-ipr.en.html&lt;/a&gt;&lt;/p&gt;&lt;/div&gt;
  85. </description>
  86. </item>
  87. <item>
  88. <title>PetjaY on What Is Signaling, Really?</title>
  89. <link>http://lesswrong.com/lw/did/what_is_signaling_really/dgfk</link>
  90. <guid isPermaLink="true">http://lesswrong.com/lw/did/what_is_signaling_really/dgfk</guid>
  91. <dc:date>2016-10-17T19:58:16.700155+00:00</dc:date>
  92. <description>
  93. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;There's a difference there, though. Less &amp;amp; fewer mean the same thing, so a writer using them abnormally isn't really an error; it's just something people don't usually do. They're, there, and their mean different things, so correcting those really does make the world better.&lt;/p&gt;&lt;/div&gt;
  94. </description>
  95. </item>
  96. <item>
  97. <title>DanArmak on Open thread, October 2011</title>
  98. <link>http://lesswrong.com/lw/7wu/open_thread_october_2011/dgfi</link>
  99. <guid isPermaLink="true">http://lesswrong.com/lw/7wu/open_thread_october_2011/dgfi</guid>
  100. <dc:date>2016-10-17T18:33:21.681343+00:00</dc:date>
  101. <description>
  102. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;I'm not sure what you mean by conflict between individuals.&lt;/p&gt;
  103. &lt;p&gt;If you mean actual conflict like arguing or fighting, then choosing between donating to save five hungry people in Africa vs. two hungry people in South America isn't a moral choice if nobody can observe your online purchases (let alone counterfactual ones) and develop a conflict with you. Someone who secretly invents a cure for cancer doesn't have moral reasons to cure others, because they don't know he can and are not in conflict with him.&lt;/p&gt;
  104. &lt;p&gt;If you mean conflict between individuals' own values, where each hungry person wants you to save them, then every single decision is moral because there are always people who'd prefer you give them your money instead of doing anything else with it, and there are probably people who want you dead as a member of a nationality, ethnicity or religion. Apart from the unpleasant implications of this variant of utilitarianism, you didn't want to label all decisions as moral.&lt;/p&gt;&lt;/div&gt;
  105. </description>
  106. </item>
  107. <item>
  108. <title>TheAncientGeek on Open thread, October 2011</title>
  109. <link>http://lesswrong.com/lw/7wu/open_thread_october_2011/dgf5</link>
  110. <guid isPermaLink="true">http://lesswrong.com/lw/7wu/open_thread_october_2011/dgf5</guid>
  111. <dc:date>2016-10-17T13:49:08.419383+00:00</dc:date>
  112. <description>
  113. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;Ingenious. However, I can easily get round it by adding the rider that morality is concerned with conflicts &lt;em&gt;between individuals&lt;/em&gt;. As stated, that is glib, but it can be motivated. Conflicts between individuals, in the absence of rules about how to distribute resources, are destructive, leading to waste of resources. (Yes, I can predict the importance of various kinds of &quot;fairness&quot; to morality.) Conflicts &lt;em&gt;within&lt;/em&gt; individuals are much less so. Conflicts aren't a problem because they are conflicts; they are a problem because of their possible consequences.&lt;/p&gt;&lt;/div&gt;
  114. </description>
  115. </item>
  116. <item>
  117. <title>Wes_W on Counterfactual Mugging</title>
  118. <link>http://lesswrong.com/lw/3l/counterfactual_mugging/dge4</link>
  119. <guid isPermaLink="true">http://lesswrong.com/lw/3l/counterfactual_mugging/dge4</guid>
  120. <dc:date>2016-10-16T06:49:57.682182+00:00</dc:date>
  121. <description>
  122. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;You're fundamentally failing to address the problem.&lt;/p&gt;
  123. &lt;p&gt;For one, your examples just plain omit the &quot;Omega is a predictor&quot; part, which is key to the situation. Since Omega is a predictor, there is no distinction between making the decision ahead of time or not.&lt;/p&gt;
  124. &lt;p&gt;For another, unless you can prove that your proposed alternative doesn't have pathologies just as bad as the Counterfactual Mugging, you're &lt;em&gt;at best&lt;/em&gt; back to square one.&lt;/p&gt;
  125. &lt;p&gt;It's very easy to say &quot;look, just don't do the pathological thing&quot;. It's very hard to formalize that into an actual decision theory, without creating new pathologies. I feel obnoxious to keep repeating this, but &lt;em&gt;that is the entire problem in the first place&lt;/em&gt;.&lt;/p&gt;&lt;/div&gt;
  126. </description>
  127. </item>
  128. <item>
  129. <title>TheAncientGeek on Quantum Non-Realism</title>
  130. <link>http://lesswrong.com/lw/q5/quantum_nonrealism/dgdw</link>
  131. <guid isPermaLink="true">http://lesswrong.com/lw/q5/quantum_nonrealism/dgdw</guid>
  132. <dc:date>2016-10-15T22:34:32.092776+00:00</dc:date>
  133. <description>
  134. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;There are a number of kinds and grades of non-realism.&lt;/p&gt;
  135. &lt;blockquote&gt;
  136. &lt;p&gt;Well, obviously, once you know you didn't get a measurement, its probability becomes zero&lt;/p&gt;
  137. &lt;p&gt;has got to be one of the most embarrassing wrong turns in the history of science.&lt;/p&gt;
  138. &lt;p&gt;If you take all this literally, it becomes the consciousness-causes-collapse interpretation of quantum mechanics. These days, just about nobody will confess to actually believing in the consciousness-causes-collapse interpretation of quantum mechanics—&lt;/p&gt;
  139. &lt;/blockquote&gt;
  140. &lt;p&gt;It's not an inevitable slide. An interpretation that is anti-realist about collapse will not attribute the cause of collapse to consciousness, since it does not acknowledge the reality of collapse in the first place. It nonetheless has to explain the process of disregarding unobserved possibilities... which it can do by saying that the observer is updating their subjective map on the basis of fresh information. Selective anti-realism about collapse is a consistent position. Sweeping anti-realism might not be, but that is another issue. The subjective interpretation of collapse is posited on information becoming available to an observer from an external world, so it is not sweeping anti-realism.&lt;/p&gt;&lt;/div&gt;
  141. </description>
  142. </item>
  143. <item>
  144. <title>siIver on The curse of identity</title>
  145. <link>http://lesswrong.com/lw/8gv/the_curse_of_identity/dgdg</link>
  146. <guid isPermaLink="true">http://lesswrong.com/lw/8gv/the_curse_of_identity/dgdg</guid>
  147. <dc:date>2016-10-15T16:18:51.463089+00:00</dc:date>
  148. <description>
  149. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;Well, fuck.&lt;/p&gt;&lt;/div&gt;
  150. </description>
  151. </item>
  152. <item>
  153. <title>siIver on Levels of Action</title>
  154. <link>http://lesswrong.com/lw/58g/levels_of_action/dgd3</link>
  155. <guid isPermaLink="true">http://lesswrong.com/lw/58g/levels_of_action/dgd3</guid>
  156. <dc:date>2016-10-15T07:49:57.130306+00:00</dc:date>
  157. <description>
  158. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;As is, every level is only useful insofar as it helps with lower levels. But Level 1 still isn't the ultimate goal. You don't live to do the dishes, nor – at least not necessarily – to work. I think this model should be extended by Level 0 actions, which are things that directly cause happiness (or, alternatively, whatever else your ultimate goal is in life). Level 1 is, I think, solely useful to provide you (or others) with more opportunities to do Level 0. Level 2 then is useful to help you with Level 1, etc., so everything stays the same. Your thoughts about how people do too few / too many actions on a certain level are also directly applicable to Level 0.&lt;/p&gt;
  159. &lt;p&gt;What is different is that all Level n actions now also have a Level 0 component, but I think that's useful to have since it corresponds to a real thing in the world that has previously not been covered. As an example, if you can do a Level 2 &amp;amp; 0 action (such as reading up on computer science which you enjoy doing) instead of a pure Level 0 action, then that should always be a good idea, even if there is a risk of low connectivity back to Levels 1 and 0.&lt;/p&gt;&lt;/div&gt;
  160. </description>
  161. </item>
  162. <item>
  163. <title>Gunnar_Zarncke on MIRI's 2016 Fundraiser</title>
  164. <link>http://lesswrong.com/lw/ny1/miris_2016_fundraiser/dgd0</link>
  165. <guid isPermaLink="true">http://lesswrong.com/lw/ny1/miris_2016_fundraiser/dgd0</guid>
  166. <dc:date>2016-10-14T22:56:15.162334+00:00</dc:date>
  167. <description>
  168. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;I'm not sure this has the best visibility here in Main. I just noted it right now because I haven't looked in Main for ages. And it wasn't featured in discussions, or was it?&lt;/p&gt;&lt;/div&gt;
  169. </description>
  170. </item>
  171. <item>
  172. <title>DanArmak on Open thread, October 2011</title>
  173. <link>http://lesswrong.com/lw/7wu/open_thread_october_2011/dgcy</link>
  174. <guid isPermaLink="true">http://lesswrong.com/lw/7wu/open_thread_october_2011/dgcy</guid>
  175. <dc:date>2016-10-14T18:51:54.792230+00:00</dc:date>
  176. <description>
  177. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;Some people think that any value, if it is the only value, naturally tries to consume all available resources. Even if you explicitly make a satisficing, non-maximizing value (e.g. &quot;make 1000 paperclips&quot;, not just &quot;make paperclips&quot;), a rational agent pursuing that value may consume infinite resources making more paperclips just in case it's somehow wrong about already having made 1000 of them, or in case some of the ones it has made are destroyed.&lt;/p&gt;
  178. &lt;p&gt;On this view, all values need to be able to trade off one another (which implies a common quantitative utility measurement). Even if it seems obvious that the chance you're wrong about having made 1000 paperclips is very small, and you shouldn't invest more resources in that instead of working on your next value, this needs to be explicit and quantified.&lt;/p&gt;
  179. &lt;p&gt;In this case, since all values inherently conflict with one another, all decisions (between actions that would serve different values) are moral decisions in your terms. I think this is a good intuition pump for why some people think all actions and all decisions are necessarily moral.&lt;/p&gt;&lt;/div&gt;
  180. </description>
  181. </item>
  182. <item>
  183. <title>hairyfigment on Open thread, October 2011</title>
  184. <link>http://lesswrong.com/lw/7wu/open_thread_october_2011/dgcp</link>
  185. <guid isPermaLink="true">http://lesswrong.com/lw/7wu/open_thread_october_2011/dgcp</guid>
  186. <dc:date>2016-10-14T10:37:41.105037+00:00</dc:date>
  187. <description>
  188. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;Clearly I should have asked about actions rather than people. But the Babyeaters were &lt;em&gt;not&lt;/em&gt; ignorant that they were causing great pain and emotional distress. They may not have known how long it continued, but none of the human characters IIRC suggested this information might change their minds. Because those aliens had a genetic tendency towards non-human preferences, and the (working) society they built strongly reinforced this.&lt;/p&gt;&lt;/div&gt;
  189. </description>
  190. </item>
  191. <item>
  192. <title>CCC on Open thread, October 2011</title>
  193. <link>http://lesswrong.com/lw/7wu/open_thread_october_2011/dgco</link>
  194. <guid isPermaLink="true">http://lesswrong.com/lw/7wu/open_thread_october_2011/dgco</guid>
  195. <dc:date>2016-10-14T10:30:22.010240+00:00</dc:date>
  196. <description>
  197. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;What they did was clearly &lt;em&gt;wrong&lt;/em&gt;... but, at the same time, they did not know it, and that has relevance.&lt;/p&gt;
  198. &lt;p&gt;Consider: you are given a device with a single button. You push the button and a hamburger appears. This is repeatable; every time you push the button, a hamburger appears. To the best of your knowledge, this is the only effect of pushing the button. Pushing the button therefore does not make you an immoral person; pushing the button several times to produce enough hamburgers to feed the hungry would, in fact, be the action of a moral person.&lt;/p&gt;
  199. &lt;p&gt;The above paragraph holds &lt;em&gt;even if&lt;/em&gt; the device also causes lightning to strike a different person in China every time you press the button. (Although, in this case, creating the device was presumably an immoral act).&lt;/p&gt;
  200. &lt;p&gt;So, back to the babyeaters; some of their &lt;em&gt;actions&lt;/em&gt; were immoral, but they themselves were not immoral, due to their ignorance.&lt;/p&gt;&lt;/div&gt;
  201. </description>
  202. </item>
  203. <item>
  204. <title>thrawnca on Counterfactual Mugging</title>
  205. <link>http://lesswrong.com/lw/3l/counterfactual_mugging/dgcb</link>
  206. <guid isPermaLink="true">http://lesswrong.com/lw/3l/counterfactual_mugging/dgcb</guid>
  207. <dc:date>2016-10-14T03:21:25.443140+00:00</dc:date>
  208. <description>
  209. &lt;div class=&quot;md&quot;&gt;&lt;blockquote&gt;
  210. &lt;p&gt;we want a rigorous, formal explanation of exactly how, when, and why you should or should not stick to your precommitment&lt;/p&gt;
  211. &lt;/blockquote&gt;
  212. &lt;p&gt;Well, if we're designing an AI now, then we have the capability to make a binding precommitment, simply by writing code. And we are still in a position where we can hope for the coin to come down heads. So yes, in that privileged position, we should bind the AI to pay up.&lt;/p&gt;
  213. &lt;p&gt;However, to the question as stated, &quot;is the decision to give up $100 when you have no real benefit from it, only counterfactual benefit, an example of winning?&quot; I would still answer, &quot;No, you don't achieve your goals/utility by paying up.&quot; We're specifically told that the coin has already been flipped. Losing $100 has negative utility, and positive utility isn't on the table.&lt;/p&gt;
  214. &lt;p&gt;Alternatively, since it's asking specifically about the decision, I would answer, If you haven't made the decision until after the coin comes down tails, then paying is the wrong decision. Only if you're deciding in advance (when you still hope for heads) can a decision to pay have the best expected value.&lt;/p&gt;
  215. &lt;p&gt;Even if deciding in advance, though, it's still not a guaranteed win, but rather a gamble. So I don't see any inconsistency in saying, on the one hand, &quot;You should make a binding precommitment to pay&quot;, and on the other hand, &quot;If the coin has already come down tails without a precommitment, you shouldn't pay.&quot;&lt;/p&gt;
  216. &lt;p&gt;If there were a lottery where the expected value of a ticket was actually positive, and someone comes to you offering to sell you their ticket (at cost price), then it would make sense in advance to buy it, but if you didn't, and then the winners were announced and that ticket &lt;em&gt;didn't&lt;/em&gt; win, then buying it no longer makes sense.&lt;/p&gt;&lt;/div&gt;
  217. </description>
  218. </item>
  219. <item>
  220. <title>username2 on Open thread, October 2011</title>
  221. <link>http://lesswrong.com/lw/7wu/open_thread_october_2011/dgc7</link>
  222. <guid isPermaLink="true">http://lesswrong.com/lw/7wu/open_thread_october_2011/dgc7</guid>
  223. <dc:date>2016-10-14T00:04:43.780452+00:00</dc:date>
  224. <description>
  225. &lt;div class=&quot;md&quot;&gt;&lt;p&gt;Survey assumed a consequentialist utilitarian moral framework. My moral philosophy is neither, so there was no adequate answer.&lt;/p&gt;&lt;/div&gt;
  226. </description>
  227. </item>
  228. </channel>
  229. </rss>
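One interoperability detail visible in the source above: item dates are given only as dc:date in W3C date-time format, while RSS 2.0's native element is pubDate, which takes an RFC 822 date. Readers that ignore the Dublin Core namespace will show undated items. A feed can carry both; as a sketch, using the converted timestamp of the first item above:

```xml
<item>
  <!-- Dublin Core date, as the feed currently publishes it -->
  <dc:date>2016-10-19T11:46:28.511688+00:00</dc:date>
  <!-- Native RSS 2.0 equivalent, for readers that ignore dc:date -->
  <pubDate>Wed, 19 Oct 2016 11:46:28 +0000</pubDate>
</item>
```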

If you would like to create a banner that links to this page (i.e. this validation result), do the following:

  1. Download the "valid RSS" banner.

  2. Upload the image to your own server. (This step is important. Please do not link directly to the image on this server.)

  3. Add this HTML to your page (change the image src attribute if necessary):
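The HTML snippet itself is not shown above. A hedged reconstruction of the kind of markup meant here — the link target is the validation URL given below, while the image filename is a placeholder for your own hosted copy of the banner:

```html
<a href="http://www.feedvalidator.org/check.cgi?url=http%3A//lesswrong.com/comments/.rss">
  <!-- src is a placeholder: point it at the banner image on your own server -->
  <img src="valid-rss.png" alt="[Valid RSS]" title="Validate my RSS feed" />
</a>
```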

If you would like to create a text link instead, here is the URL you can use:

http://www.feedvalidator.org/check.cgi?url=http%3A//lesswrong.com/comments/.rss

Copyright © 2002-9 Sam Ruby, Mark Pilgrim, Joseph Walton, and Phil Ringnalda