TAN Zhi Xuan

PhD Student, MIT CoCoSci & MIT ProbComp | Based in USA

8 Aug 2020

Cause Areas of Interest in EA:

  • AI safety & technical research


Tell us a little bit about yourself

My name is Zhi Xuan but I also go by Xuan. I am one of the current board members of Effective Altruism Singapore (“EA SG”). I am also a rising 2nd year Computer Science PhD student in the Probabilistic Computing and Computational Cognitive Science groups at the Massachusetts Institute of Technology (“MIT”), where I do research on Artificial Intelligence (“AI”) that focuses on aligning AI with human values.

My journey into Effective Altruism (“EA”) started in 2014 when I encountered the EA student group as an undergraduate at Yale University. I had previously been exposed to EA online, but I never really got into it then. Through Yale EA, I learnt a lot more about the EA movement and its ideas. Despite my initial reluctance, those ideas resonated enough with me that I continued to be involved, and eventually served as one of the co-presidents.

After graduating, I spent a year in Singapore working at the Agency for Science, Technology and Research ("A*STAR"). During this time, I got more involved with EA SG, helping out with the Doing Good Better conference, then starting an AI safety reading group at the suggestion of a fellow EA SG member. Through that group, I got involved in some advocacy efforts around Singaporean AI policy. Now that I have left Singapore again for my studies, I am involved with the EA group at MIT, while continuing to assist EA SG in my capacity as a board member.


How did you first get involved with the effective altruism community?

In my late teens, I became very concerned about inequality and the arbitrariness of privilege. My family is wealthy, though I was not told that growing up, and when I began to realize this for myself, I started asking, "Why do I have all of this wealth and privilege for no good reason?" I felt like I did not deserve it - to me, it was the product of an economic system that randomly and unjustly allocated a lot more resources to some individuals than others. I developed a deep sense of guilt about my class privilege, especially given the sheer amount of inequality in the world, but I did not really know what to do about it - for example, how to redistribute my wealth.

I went to college in the US – another function of class privilege – with these political leanings, if you will. Then, as mentioned, I encountered the EA student group at Yale during an extracurricular fair. The group was running a Giving Game of sorts - they gave me a dollar, asked me to donate it to one of three charities and then requested an explanation for my decision. They told me that my thought process in coming to my decision was interesting, and invited me to attend their meetings. However, I was reluctant because of my leftist political leanings. I was very sceptical of the effectiveness and general approach of charities. I did not feel like charity would address the root causes of inequality – and for the record, I still do not.

In any case, because the EA student group extended an invitation, I decided to check it out and started going for their regular group meetings. I did not fully believe in their approach - it felt too apolitical for me. Nonetheless, I appreciated the fact that the group was trying to get obscenely rich Ivy League students to redistribute their wealth. EA’s focus on global concerns also appealed to me more than the causes that many of my peers were championing at a local level.

Eventually, many EA ideas and practices grew on me: epistemic humility, marginal and counterfactual thinking, discussion norms which promote changing one’s mind in light of new arguments and evidence, as well as the widespread concern for all sentient beings, including non-human animals and future generations. That is why I remain involved with EA. While I still think it lacks a complete picture of how best to improve the world, it has a unique set of perspectives and tools to offer, which I have personally found highly beneficial.

How and why did you get involved in AI safety and policy?

I wanted to be a scientist ever since I was a kid, though I did not know what kind of scientist until I was older. Eventually, I became interested in computational neuroscience, because I enjoyed all the science subjects I was taught in high school – biology, chemistry, physics – while also enjoying mathematics. I thought computational neuroscience was the one field where I could explore all those interests, so I decided to pursue it in college. I was also enamoured with the idea of cyborg technology, and the question of whether humans could ever augment their bodies and minds by better understanding the brain. I was a pessimist about human nature, but an optimist about technology - I thought we humans could use technology to save us from ourselves.

Then I read Superintelligence by Nick Bostrom, upon the recommendation of an EA friend. Reading Superintelligence shattered my naïve faith in technology. I went from this optimism about how technology would influence the future of humanity to deep pessimism – so now I am pessimistic about everything. *laughs* While this is no longer my exact position, I became quite convinced by Bostrom’s argument that, if we do not put special effort into aligning AI with human values, the default outcome will be human extinction, or worse. I was also persuaded by his arguments that, of the technologies that will most likely shape the future of intelligent life, AI is likely to progress faster (and hence more dangerously) than human augmentation or whole brain emulation. Since I was already studying computer science then, with the hope of eventually applying those skills to computational neuroscience, it made sense to switch, and focus on AI safety instead.

That said, my motivations differ in interesting ways from those of many EA-aligned individuals involved in AI safety and AI research, including Nick Bostrom himself. They think that if we get AI right, then we can not only make the world a much better place, but also expand civilization into the rest of the universe and fill it with happy people. They view this as an additional reason to care about AI.

I find this very strange – to be honest, I am fine with human extinction, as long as it is painless, voluntary, and graceful. I do not see anything wrong with the last generation of humans peacefully living out their lives without wanting any more children. I care more about just ensuring that AI does not cause more suffering and oppression. Unfortunately, I also think AI has great potential to do so. While I am no longer as certain that (painful) human extinction is the default outcome, I remain deeply pessimistic: either humanity will lose control of misaligned AI, or AI will become entrenched as a technology of oppression and control. Just think about what happens whenever any labour-reducing technology is developed: it is the rich who first possess that technology, and who reap most of its benefits, often at the expense of others. I think the same is already happening with AI, and will likely continue to happen. If we do not want this future, we really need to address both the technical and political obstacles to building AI that is truly beneficial for all.

What work have you done in AI safety?

When I first started doing AI research as an undergraduate, everyone was doing machine learning, and I started toying with the idea of applying machine learning to morality. At that point, I was also taking classical Chinese classes, where I translated Confucius' works and engaged with his argument that moral education happens through ritual (li). This led me to think about how young kids learn and internalise norms. Their learning process seemed to draw on a broader capacity to infer what other people think is right (not necessarily what is really right) and to behave accordingly. Instead of trying to ascertain absolute moral truths - a difficult question, since there is a great deal of disagreement - I decided to focus on the human capacity to learn social and moral norms from our environment and the people around us. I wanted to figure out how we could build this norm-learning capacity into machines.

This inspired my first AI research project as an undergraduate, where I focused on how robots can learn the norms of ownership. We all navigate these norms, but it is pretty hard to figure out the rules. For example, when am I allowed to touch your wallet? There is no overarching rule. Perhaps I can pick up your wallet when you drop it and I need to return it to you, but not otherwise. This was my first foray into the problem of AI value alignment, and I continue to be interested in how we can engineer this capacity for moral learning into machines.

After college, I spent a year doing research at A*STAR. Very fortunately, I ended up working with Prof. Desmond Ong, an AI researcher with a background in computational psychology and cognitive science. A lot of his work centres on computational models of emotion: formal probabilistic models of how humans appraise various events and form emotional responses to them. Through this, I learned about the field of computational cognitive science, which aims to build computational models of how people learn, think and act, as well as how we understand each other’s motivations. I thought this was a promising approach to human-aligned AI, so I applied to the Computational Cognitive Science group at MIT for my PhD. I was lucky to be accepted and am presently conducting research in both this lab and the MIT Probabilistic Computing Project. One recent project asks the question: how do we infer the goals of other people, while accounting for the fact that they might fail to achieve them? I hope this project will lead to AI that can better infer when to assist us by recognizing that our plans might fail, and eventually AI that can understand how our everyday practices fall short of the ideals we most deeply hold.
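To give a rough flavour of what failure-aware goal inference looks like, here is a toy sketch in Python. The goals, actions, and probabilities are invented purely for illustration - this is not the actual model from my research - but the basic idea is there: an observer keeps a posterior over an agent's possible goals, while allowing for the chance that any given action is a slip-up rather than a step in a successful plan.

```python
# Toy sketch of Bayesian goal inference that allows for failed actions.
# Goals, actions, and probabilities here are hypothetical, for illustration only.

GOALS = ["coffee", "printer"]            # candidate goals the agent might have
PRIOR = {"coffee": 0.5, "printer": 0.5}  # observer's prior belief over goals
FAIL_PROB = 0.2                          # chance any action is a slip-up

def action_likelihood(action, goal):
    """P(observed action | goal), allowing for execution failures."""
    intended = "left" if goal == "coffee" else "right"
    return (1 - FAIL_PROB) if action == intended else FAIL_PROB

def infer_goal(observed_actions):
    """Update the posterior over goals after each observed action."""
    posterior = dict(PRIOR)
    for action in observed_actions:
        for goal in GOALS:
            posterior[goal] *= action_likelihood(action, goal)
        total = sum(posterior.values())
        posterior = {g: p / total for g, p in posterior.items()}
    return posterior

# One "wrong" move does not rule out a goal, because the observer
# knows that plans can fail.
print(infer_goal(["left", "left", "right"]))
```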

What work have you done in AI policy?

Besides technical research, I have been involved in some advocacy around AI policy and governance. If you are considering AI as a career and a means of influencing our civilisation in a positive direction, I think it is really important to engage in ways that go beyond just technical research. The policies that our governments put into place can have far-reaching effects, both in the near-term and long-term. I am particularly concerned about how these AI policies can exacerbate existing problems of discrimination, oppression and surveillance, or neglect the importance of AI safety. This is why it feels important for me to engage with AI policy matters despite my technical background. 

I first got involved in AI policy issues when someone in the EA SG AI safety reading group mentioned that a government agency - the Personal Data Protection Commission ("PDPC") - had put out a draft model AI governance framework that was worth responding to. I ended up reading it, and found many things that were frustratingly inadequate. So I got together with a group of other people who were interested, and we came up with a response to the government's framework. First, we pushed the PDPC to clarify the definition of AI - they were focusing on machine learning even though AI encompasses more than that, and we thought they should be clear about the technology they were trying to govern. Second, we wanted the framework to focus on a company's social responsibility to do right by people, rather than its desire to maximise consumer satisfaction. Third, we wanted the framework to engage more with the AI policy literature, e.g. the risk that unsafe or unfair AI can arise due to inter-company competition.

I have also contributed an essay to the 2019 White Paper by Live With AI, a French-Singaporean think-tank that aims to foster the positive development of AI. In the essay, Neither Indifference Nor Essentialism, I write about how difficult it is to build inclusive AI on a global scale because of varying local cultural conditions. For example, a US-based Google engineer, used to US-centred racial constructs, might not appreciate what inclusiveness means to racial minorities in Singapore. For this reason, I emphasise the importance of diversity in development teams in my piece. These conversations about diversity and equity still feel very lacking in the AI space in Singapore.

What challenges have you faced with your EA journey?

In my early years of involvement with the EA community, I really struggled to find perspectives that encompassed my commitment to social justice. I was uncomfortable with the fact that, at least in the earlier years of the movement, many EAs were unwilling to engage in the work of systemic and structural change, because it was less evidence-based and its outcomes a lot more uncertain. I also felt that, by primarily engaging wealthy, privileged individuals, EA was never going to become a mass political movement capable of widespread social change, like feminism or socialism.

Nonetheless, I came to appreciate the fact that EA communities had important ideas that other people were not talking about. I also really enjoyed the EA community's willingness to question, debate, and change positions in the face of good arguments and evidence. Eventually, I learnt to separate this EA approach of open inquiry, and the corresponding conceptual toolkit, from some of the substantive empirical and moral claims widely held in EA that I do not personally agree with. For example, I am less market-friendly than most high-profile EAs, and also no longer identify as utilitarian – I believe there are moral duties like friendship, reciprocity, and mutual respect that cannot be reduced to maximizing welfare. Once I was able to separate EA methodology from common EA beliefs, I felt less pressure to conform to those beliefs or to defend my own, and that made me more comfortable with EA as a whole. 


What would you like to see more of in EA communities?

I am very lucky to have been exposed to critical theory on social change through my friends and the courses I took in college. Unfortunately, I have found that this lens on change is often missing in EA communities, which can be frustrating. Because of that, I have had to do a lot of the work of integrating EA and social justice-oriented thinking by myself. I would love to see more of an effort to integrate these two paradigms of achieving social change. I think a lot of productive work can be done in this area - both intellectually and in terms of movement building. In doing so, we would be able to draw more people to the EA community.

I would also like the EA community to be more welcoming and inclusive. I find that EA's focus on certain demographics (e.g. highly privileged Ivy League or Oxbridge students) and certain cause areas (e.g. existential risk) has left a lot of people on the side-lines. I know I work in an area concerned with existential risk myself, but I was lucky because I just happened to be pursuing a computer science degree when I learned that AI might pose a serious existential threat to humanity. If not for that, I think I would have felt very left out of the whole EA conversation when it started shifting in the direction of existential risk and long-termism.

What advice do you have for EA members grappling with guilt about not doing more?

If you are able to, it really helps to take the Giving What We Can Pledge as a practical first step. Once you lock it in, you know you are already doing something, and you stop having to worry so much about every dollar you spend (e.g. whether you should buy Starbucks or donate the money instead). Very quickly, you become used to donating a portion of your income, and just having a lower amount of disposable income overall. 

In terms of the internal work of trying to live up to my ideals, it has been important for me to acknowledge that I am not selfless, and that is okay – “okay” not in the ideal sense of actually having satisfied all of my moral duties, but in the practical sense that I cannot be morally perfect, because morality is so demanding, and I have to accept that for what it is. It is okay if you think that everyone should be vegan, as I do, but, like me, have not yet made that step. It is okay even if you are a utilitarian who thinks that there are infinite moral obligations, and that you could always be doing more. We should acknowledge that we are all imperfect moral beings.

Rather than set the bar at being the best person morally, which is impossible, I think we should sit with the productive tension between our aspirational standards and our present inability to live up to those standards. This is an okay state to be in, and a state that will lead to growth. Very unfortunately, I have seen some EAs give up, or strongly consider it, because they have not been able to live with this tension. They think "Oh, I am a terrible person. Why should I even try?" That is a false dichotomy: it is not a choice between doing the most good possible all of the time or throwing in the towel entirely. There is an in-between space we can live in, and by abiding there, we allow for the possibility that we can become better people, even if we are not quite there at the moment.
