misk@sopuli.xyz to Technology@lemmy.world · English · 2 years ago
New report illuminates why OpenAI board said Altman “was not consistently candid” (arstechnica.com)
53 comments
NounsAndWords@lemmy.world · 2 years ago
Now what would the company do if the AI model started putting safety above profit (i.e. refusing to lie to profit the user, aka reducing market value)? How fucked are we if they create an AGI that puts profit above safety?
HopeOfTheGunblade@kbin.social · 2 years ago
Entirely. We all die. The light cone is turned into the maximum amount of “profit” possible. This is still better than a torment maximizer, which may come as some comfort to the tiny dollar bills made of the atoms that used to be you.
misanthropy@lemm.ee · 2 years ago
You get a paperclip maximizer: https://terbium.io/2020/05/paperclip-maximizer/