Abstract
Autonomous systems are increasingly being employed in almost every field. Their level of
autonomy in decision-making is growing along with their complexity, leading to systems
that will soon make decisions of critical importance with minimal or no human
involvement.
It is imperative, therefore, that these machines be designed to be ethically aligned with
human values to ensure that they do not inadvertently cause harm. This work discusses
the salient approaches, issues, and challenges in building ethically aligned machines.
An approach inspired by traditional Eastern thought and
wisdom is also presented.