Reinforcement Talking

Author: UCL Artificial Intelligence Society
  • Summary

  • Welcome to the UCL AI Society Podcast, Reinforcement Talking! Every episode, we aim to demystify artificial intelligence, hosting nuanced discussions with the brightest luminaries in the field. With exciting new talking points every episode, you won't want to miss a minute. We release on Sundays (and our schedule is very good, we promise). Tune in and we hope you enjoy the journey! Proudly presented by the UCL Artificial Intelligence Society.
    UCL Artificial Intelligence Society

Episodes
  • The Machine Manifesto: How AI is Transforming Politics
    2024/10/20

    Is the rise of AI making politics a more dangerously polarised game? Is it fair or ethical to allow these technologies to shape the media? And is the integrity of future election campaigns doomed? Anthony Nkyi hosts Myesha Jemison for a conversation on the broad impact of artificial intelligence's growing presence in the political sphere and on the electorate. They discuss how AI has accentuated political bias, its growing influence on social media and internet communication, how traditional media outlets are coping with misinformation, the problem of information disorder, when malicious use of AI crosses the line of free speech, the overall positives, and how to defuse the negative impacts.

    Myesha Jemison is a PhD student in the Department of History and Philosophy of Science, a Graduate Student Fellow at the Leverhulme Centre for the Future of Intelligence at the Institute for Technology and Humanity, and a member of Trinity College, Cambridge. Her PhD research examines how Cambridge Analytica, along with its parent company Strategic Communications Laboratories Group (SCL Group) and its research arm, the Behavioural Dynamics Institute (BDi), built scientific credibility without being transparent about their science and methodologies. Her dissertation also investigates how young democracies in Nigeria, Kenya and South Africa built credibility in their electoral systems.


    Articles discussed in this episode:

    • AI Poses Risks to Both Authoritarian and Democratic Politics: https://www.wilsoncenter.org/blog-post/ai-poses-risks-both-authoritarian-and-democratic-politics
    • Rise of the AI psychbots: https://www.politico.com/newsletters/digital-future-daily/2024/01/02/rise-of-the-ai-psychbots-00133487
    • An Ethiopian professor was murdered by a mob. A lawsuit alleges Facebook fueled the violence: https://edition.cnn.com/2022/12/14/tech/ethiopia-murdered-professor-lawsuit-meta-kenya-intl/index.html

    48 min
  • Stopping Killer Robots: Why and How
    2024/03/13
    What are Killer Robots? What are the laws around LAWS (Lethal Autonomous Weapons Systems)? In this episode, we spoke with Rianna Nayee, a Campaigns & Policy Officer at UNA-UK, a group which works on UN reform and specifically on LAWS.

    Organisations mentioned in this episode:

    • UNA-UK: www.una.org.uk
    • UK Campaign to Stop Killer Robots: https://ukstopkillerrobots.org.uk/

    Articles and resources mentioned in this episode:

    • The Moral and Legal Imperative to Ban Killer Robots: https://www.hrw.org/report/2018/08/21/heed-call/moral-and-legal-imperative-ban-killer-robots
    • 2023 UN resolution on AWS: https://ukstopkillerrobots.org.uk/2023/11/07/uk-campaign-to-stop-killer-robots-welcomes-landmark-un-resolution-on-autonomous-weapons-systems/
    • UK Parliamentary Champions: https://una.org.uk/parliamentary-champions-action-autonomous-weapons
    • Monitoring AI weapons development: https://automatedresearch.org/weapons-systems/
    • Stop Killer Robots in UK Universities report: https://una.org.uk/KRUniReport
    • Taking Action: https://www.stopkillerrobots.org/take-action/
    • (If you are a UCL student) UCL Amnesty International Society: https://studentsunionucl.org/clubs-societies/amnesty-international-society

    Still wanting more? Check out our further resources here: https://tinyurl.com/258s2hkx
    54 min
  • Debate: Is AI an Existential Risk to Humanity?
    2024/03/04

    Welcome to our first episode of Season 2 of Reinforcement Talking, UCL AI Society's Podcast!

    This episode is an audio version of our recent AI Debate. It's difficult to think of a question with higher stakes. Some AI experts like Geoffrey Hinton think that AI should be considered just as risky as pandemics or nuclear war; others, like Melanie Mitchell, see the risks as vanishingly small.

    It was a pleasure to host four experts on this topic:

    • Reuben Adams, a UCL AI PhD student
    • Chris Watkins, Professor of Computer Science at Royal Holloway
    • Jack Stilgoe, Professor of Science and Technology Studies at UCL
    • Kenneth Cukier, Deputy Executive Editor at The Economist.

    If you'd prefer to watch the YouTube version, here's the link.

    1 hr 41 min
