AI Alignment
Current version as of 21:03, 29 December 2024
https://intelligence.org/get-involved/
https://www.alignmentforum.org/posts/PqMT9zGrNsGJNfiFR/alignment-research-field-guide
https://www.lesswrong.com/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency
https://equilibriabook.com/toc/
https://arxiv.org/abs/1202.6153
https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh
https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc
To understand (some) existing approaches and jargon, I’d recommend at least skimming these sequences/posts, and diving deeper into whichever most resemble the directions you want to pursue:
* Embedded Agency
* Value Learning
* [https://www.lesswrong.com/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai 11 Proposals For Building Safe Advanced AI]
* Risks From Learned Optimization