Are we as humans sure of our own ethics before it’s transferred to our machines?

January 10, 2018 | Ambarish Mitra | Artificial Intelligence

At the crux of human understanding lies knowledge of the past, uncertainty, a belief surrounding that uncertainty, and the present that is being lived and observed. Learning, the art of acquiring knowledge through observation and building a belief around the uncertainty, at times minimising it, forms the basis of living. This further structures the propagation of knowledge and adds building blocks to our understanding of the universe we are all a part of.

While a lot of emphasis is put on learning and classification while minimising this uncertainty, the true state of most events and occurrences in life is not known and is not measurable through experiments. This is why most scientists attach a confidence bound to their results and understanding: everything is approximated. One major school of thought, originating in the work of Thomas Bayes, provided a theory for expressing beliefs in terms of probability and, by using past information or evidence, projecting the future; in other words, calculating the predictive probabilities of events. A major portion of classification, pattern recognition and text mining, to name a few, works under the Bayesian framework of analysis, thus mimicking human understanding with some accuracy. There is also a large literature presenting the reward-and-punishment game as one of the core components of learning.

The wisdom of crowds, and Sir Francis Galton’s famous experiment aggregating the knowledge of a crowd, account for a major shift and present a way in which every single individual’s opinion is given some weight in decision making under uncertainty. What is crucial in this framework, however, is the manner in which the aggregation is carried out in the face of that uncertainty.
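The Bayesian updating described above can be sketched with a toy example. The hypotheses, prior weights and evidence below are illustrative assumptions of mine, not anything from the post; the point is only to show how past evidence revises a belief and yields a predictive probability for the next event.

```python
# Toy Bayesian update: a belief over two hypotheses about a coin's bias,
# revised by observed evidence, then used for a predictive probability.

def posterior(priors, likelihoods):
    """Bayes' rule over a discrete set of hypotheses."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Two hypotheses about P(heads): a fair coin and a biased one.
p_heads = [0.5, 0.8]
priors = [0.5, 0.5]                 # equal belief before any evidence

# Evidence: heads is observed three times in a row.
likelihoods = [p ** 3 for p in p_heads]
post = posterior(priors, likelihoods)

# Predictive probability of heads on the next toss: each hypothesis's
# prediction, weighted by the posterior belief in that hypothesis.
predictive = sum(p * w for p, w in zip(p_heads, post))
print(post)        # belief has shifted toward the biased coin
print(predictive)  # lies between 0.5 and 0.8, closer to 0.8
```

The same weighted-average step is, in spirit, what crowd aggregation does: each opinion contributes in proportion to the weight attached to it.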
If we were to deduce this dependency structure amongst a crowd with similar knowledge, the literature provides evidence that we can reduce the uncertainty surrounding the underlying quantity of interest; one relevant question, however, is whether it is always possible to address this dependency statistically. We are stepping into a world where we are inherently interested in training machines and creating environments in which machines ease our burdens; say, verifying that you are the exact account holder of a bank through your voice. The machines then need to work out all the possible permutations and combinations of that voice in terms of its pitch and hum. This not only seems a mammoth task but is also beyond comprehension when repeated for every single individual holding an account with even a single bank. While I am writing this, I am sure there are scientists deep into researching unsupervised learning mechanisms, trying to make machines smart enough to simulate all possible permutations and combinations; the issue, however, lies with the statistical performance of algorithms that ignore the dependencies surrounding beliefs. Just as human dependencies based on shared understanding and a shared environment are prevalent, the dependencies in machines, and in the algorithms written to train them to act and perform, will be heavily conditional. Those conditions will then determine and set the tone of their performance and usage. As for getting machines to think like humans, the idea is far-fetched: to the best of my knowledge, we are still not able to put a bound on consciousness. However, a recent article in the Economist did touch upon a rule book of ethics for robots, which takes me back to asking whether we, as humans, are really sure of our ethics before training machines to think like us.
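The claim that dependence among similar beliefs limits how much aggregation can reduce uncertainty can be illustrated with a small simulation. All the numbers here (crowd size, noise levels) are illustrative assumptions: each judge's estimate is the truth plus noise, and when judges share an environment, part of that noise is common to all of them, so averaging cannot remove it.

```python
import random
import statistics

# Compare the spread of a crowd average when judges err independently
# versus when part of their error is shared (correlated beliefs).

random.seed(42)
TRUTH, N_JUDGES, TRIALS = 100.0, 25, 4000

def crowd_mean(shared_sd, private_sd):
    """One crowd estimate: shared error hits everyone, private error averages out."""
    common = random.gauss(0, shared_sd)
    return statistics.fmean(
        TRUTH + common + random.gauss(0, private_sd)
        for _ in range(N_JUDGES))

# Same total per-judge noise (sd = 10) in both scenarios.
indep = [crowd_mean(0.0, 10.0) for _ in range(TRIALS)]
shared = [crowd_mean(8.0, 6.0) for _ in range(TRIALS)]

sd_indep = statistics.stdev(indep)    # shrinks roughly like 10/sqrt(25) = 2
sd_shared = statistics.stdev(shared)  # stays near the shared sd of 8
print(sd_indep, sd_shared)
```

With independent errors the crowd mean tightens as the crowd grows; with a shared error component it does not, which is why characterising the dependency structure, rather than assuming it away, matters for the statistical performance of these algorithms.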
While I am still in the middle of my research, I am a strong advocate of characterising the dependencies that exist between each of us, and in the machines that we create, so that the propagation of our beliefs is structured better, minimising the risks attached to uncertainty.