This event is jointly organized by the Singapore Futurists and Effective Altruism (EA) Singapore.
What will the future of AI development look like? Will it turn out to be net beneficial to humanity? Or are there risks in AI development or deployment that could grow catastrophic? How can we shape the trajectory to capture the benefits while mitigating the risks?
As the optimization power and range of actions of autonomous AI agents grow over the years, researchers can already foresee specific new challenges. Will these systems behave the way designers, customers, and society want them to? What can be done to make that more likely?
Richard Mallah, Director of AI Projects at the Future of Life Institute, will join us in Singapore to speak on this topic. He will discuss the types of emergent issues; the technical research threads aimed at preventing or mitigating them (e.g. increasing contextual awareness, agent foundations, avoiding reward gaming, safe exploration, establishing scalable control, computational deference, and statistical-behavioral trust establishment, among many others); and the ethical, economic, and societal challenges these systems will pose, along with what can be done about them.
Come join us for an evening of thoughtful discussion on AI development, safety, and risk.
About the speaker:
Richard Mallah is Director of AI Projects at the Future of Life Institute, an EA-aligned NGO which aims to keep AI robust and beneficial to humanity. Richard is an advisor to multiple AI-oriented SaaS firms and NGOs, advising on AI, machine learning, knowledge management, startup life, and sustainability.
About Future of Life Institute (FLI):
FLI's mission is to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course in light of new technologies and challenges. FLI has significant initiatives within AI safety.