
Ethical Issues in Artificial Intelligence
Dr. Tirath Virdee, Director of Artificial Intelligence, CAPITA PLC (LON: CPI)


4. Mistakes Embedded in Learning Algorithms: Machines learn through examples, just like humans. Humans, however, cannot be scaled endlessly and have a finite lifetime. When machines learn and become good at particular tasks, they are endlessly scalable and have the potential to dominate a particular domain – say, assessing insurance claims. In the not-too-distant future, these algorithms will be so accurate and advanced that no human will be able to compete with their logic. That will make it increasingly difficult to challenge such algorithms, even when they contain mistakes. Have a look at some of the biggest failures in AI last year.

5. Bias Embedded in AI: Humans carry memes and cultures that embed particular biases and prejudices in their being. With machines and AI, it is the data: if the data has bias, then the AI will have bias. The vast majority of AI applications today are based on the category of algorithms known as deep learning, and it is this class of algorithms that finds patterns in data. Algorithms can perpetuate injustice in hiring, retail, and security, and may already be doing so in the criminal legal system. Indeed, there have been many examples where the current generation of ‘fair’ algorithms perpetuates discrimination.
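The point about bias can be made concrete with a minimal sketch. The data below is entirely hypothetical: a toy "pattern finder" trained on biased historical hiring records reproduces the bias through a correlated proxy (here, a postcode), even though it never sees a protected attribute directly.

```python
from collections import Counter

# Hypothetical historical hiring records: (years_experience, postcode, hired).
# Postcode "A" correlates with a group that was historically favoured.
history = [
    (5, "A", 1), (3, "A", 1), (2, "A", 1), (4, "A", 1),
    (5, "B", 0), (3, "B", 0), (6, "B", 1), (4, "B", 0),
]

def train(records):
    """Learn the majority outcome per postcode -- a toy stand-in for
    'finding patterns in data'."""
    by_postcode = {}
    for _, postcode, hired in records:
        by_postcode.setdefault(postcode, []).append(hired)
    return {pc: Counter(labels).most_common(1)[0][0]
            for pc, labels in by_postcode.items()}

model = train(history)

# Two equally qualified candidates who differ only in postcode:
print(model["A"])  # 1 -- predicted hire
print(model["B"])  # 0 -- predicted reject: the historical bias is learned
```

Real systems are far more sophisticated, but the mechanism is the same: whatever regularities sit in the data, including unjust ones, are exactly what the model optimises to reproduce.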
6. Keeping AI Away from ‘Bad’ Use: There are always people underground (for instance, on the dark web) who research the use of AI for nefarious ends. These include gaining control of financial systems, weapon systems, and personal and commercially sensitive data, disrupting due judicial processes, and much more. Have a look at this much-shared YouTube video for a relatively naïve and sensationalist feel for the use of AI in warfare. The use of AI in cybersecurity and next-generation neural cryptography will be a field of increasing importance in ensuring that the institutions we currently know and trust can continue to exist. Given how many state actors are involved in the bad uses of AI, it is almost a non-subject: the technology is largely democratised, and ultimately it comes down to motivation and need. I believe this is an impossible ask – keeping AI away from bad use.
7. Unintended Consequences, Singularity and Humane Treatment of AI: Intelligence in current systems is superficial; the present generation of deep-learning systems is little more than tensor algebra and calculus with particular ways of performing efficient optimisation. However, research is underway into the building blocks of generalised artificial intelligence that may lead to a form of self-consciousness. Just as we value the rights of animals and the planet as a whole, there will emerge procedures and declarations of the rights of machines. This is to enable machines to be classified according to rights and responsibilities, rather than for humans to be seen as some form of overlords of all creation. It may be that machines come to have vastly more intelligence than biological systems – in terms of raw speed alone, signals in copper travel orders of magnitude faster than signals across biological neural connections – but their form of consciousness (feelings of reward and aversion) may be very different from ours. We may start by implementing our value system in neuromorphic hardware, but the neuroplasticity of such systems (their ability to modify their own mechanisms for learning) may ensure that they find their own rightful place in nature, with the possibility of self-assembly, evolution and purpose. Depending on the value-sets that such systems deem necessary for their own sustenance, humans must ponder the issue of “pulling the plug” if we begin to become irrelevant or a hindrance to such intelligence.
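The claim that today's "learning" reduces to calculus plus optimisation can be illustrated with a deliberately tiny sketch: a single learnable weight fitted by plain gradient descent on the hypothetical relationship y = 2x. Nothing here is specific to any real library; it is the bare mechanism that deep-learning frameworks scale up to billions of parameters.

```python
# Toy example: fit y = 2x with one weight and gradient descent.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs, y = 2x
w = 0.0          # the single learnable parameter
lr = 0.05        # learning rate

for _ in range(200):
    # Gradient of the mean squared error 0.5*(w*x - y)**2 w.r.t. w
    grad = sum((w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad   # the optimisation step

print(round(w, 3))  # converges towards 2.0
```

There is no understanding anywhere in this loop, only a derivative and a repeated small correction; "intelligence" in such systems is, in that sense, superficial.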
In an older, and not widely cited, paper, Nick Bostrom of the Future of Humanity Institute (FHI) at the University of Oxford concludes that “Although current AI offers us few ethical issues that are not already present in the design of cars or power plants, the approach of AI algorithms toward more humanlike thought portends predictable complications. Social roles may be filled by AI algorithms, implying new design requirements like transparency and predictability. Sufficiently general AI algorithms may no longer execute in predictable contexts, requiring new kinds of safety assurance and the engineering of artificial ethical considerations. AIs with sufficiently advanced mental states, or the right kind of states, will have moral status, and some may count as persons— though perhaps persons very much unlike the sort that exist now, perhaps governed by different rules. And finally, the prospect of AIs with superhuman intelligence and superhuman abilities presents us with the extraordinary challenge of stating an algorithm that outputs super ethical behavior. These challenges may seem visionary, but it seems predictable that we will encounter them; and they are not devoid of suggestions for present-day research directions.” The FHI divides its work into four main categories: Macrostrategy, AI Safety, the Center for the Governance of AI, and Biotechnology.
I hope that by now the reader sees that the issues around the ethics of AI are deep and wide. AI has the potential to change us and the fabric of our society. The governance and regulatory frameworks of traditional institutions are no match for the technologies and means made possible by AI. One has only to look at the debacles around Brexit and elections around the globe to see the impact of using information and advertising selectively and in a targeted manner. Couple this with the nefarious use of AI and the possible emergence of superintelligence (read Nick Bostrom’s Superintelligence and James Lovelock’s Novacene: The Coming Age of Hyperintelligence), and one has enough material for a Shakespearean tragedy.
