APAC CIO Outlook
Ethical Issues in Artificial Intelligence

Dr. Tirath Virdee, Director of Artificial Intelligence, CAPITA PLC (LON: CPI)
Consider Cambridge Analytica, the British consulting firm that allegedly carried out psychometric profiling of US audiences to enable targeted distribution of material to voters. Simply by creating a Facebook quiz app that was downloaded by about 300k users, it gathered information on around 87 million people (the friends and connections of the people who installed the app). The power of simple AI to make deductions from the quiz responses of those users and the ‘likes’ of their friends makes one realise the astonishing implications of AI. Access to data and a very simple algorithm may have altered the results of key democratic processes.
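
The mechanics behind such profiling need not be sophisticated. As a purely illustrative sketch (the data, page names, and model below are invented, not Cambridge Analytica's actual pipeline), a plain logistic regression over binary ‘like’ vectors is enough to predict a trait and then single out the users a campaign would target:

```python
# Illustrative only: predicting a psychometric trait from page "likes"
# with a plain logistic regression. All data and page names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
pages = ["page_a", "page_b", "page_c", "page_d"]   # hypothetical pages
n_users = 1000

# Each row is one user's binary like-vector over the pages.
likes = rng.integers(0, 2, size=(n_users, len(pages)))

# Pretend the (unobserved) trait correlates with liking the first two pages.
trait = (likes[:, 0] + likes[:, 1] + rng.normal(0, 0.5, n_users) > 1).astype(int)

model = LogisticRegression().fit(likes, trait)

# Score a new cohort and pick the users most likely to hold the trait,
# i.e. the ones who would receive tailored material.
new_likes = rng.integers(0, 2, size=(10, len(pages)))
scores = model.predict_proba(new_likes)[:, 1]
targets = np.argsort(scores)[::-1][:3]
print("highest-scoring users:", targets, scores[targets].round(2))
```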

Data harvesting, and the way it was used by a well-funded, determined group, is an ethical issue. One almost needs to revisit what being a democracy means; people have always relied on market surveys, making deductions and targeting audiences. People will look for the failings in the chain of data privacy, but I feel the issue is far deeper, and it will only become more convoluted and complex in an era where big data enables so much. Incidentally, the US regulator, the Federal Trade Commission (FTC), has approved a $5bn fine on Facebook to settle claims of data privacy violations by a 3-2 vote (a fairly close call in my view).

Ethics around AI is becoming a big subject; cursory research indicates that more than 17,000 scientific and academic articles have been written on it since 2018. There are centres of excellence around the globe and an increasing number of books and blogs on the subject. The issue is also of tremendous importance to companies such as Capita. As an introductory guide, I recommend The Hitchhiker's Guide to AI Ethics. Many people would see the main ethical issues around AI as falling into the following categories:

1. Unemployment: consider automation: self-driving vehicles; the automation possible in 78% of manual repetitive processes; the democratisation of knowledge (as tackled in one of my earlier posts); and so on. At Capita, we could probably automate, and will attempt to automate, around 80% of our repetitive procedures and tasks. There are studies indicating that AI will create more jobs than it gets rid of, but the jury is still out. In 2014, roughly the same revenues were generated by the three biggest companies in Detroit and the three biggest companies in Silicon Valley, yet Silicon Valley had ten times fewer employees. There will be a positive spin on the subject, along the lines of "one day we might look back and think it was barbaric that human beings were required to sell the majority of their waking time just to be able to live", but we will have to redefine what work means.

2. Inequality Created by AI: global commerce depends on goods and services, and the economies and corporate entities that can effectively make use of new technologies can outdo all others. This gives those that can invest (the richer countries, companies and individuals) the potential to dominate the future landscape of wealth. Inequality breeds revolutions and schisms.

3. Machines Affecting Human Behaviour: as we enter the era when machines can mimic human responses (e.g., Eugene Goostman) and beat humans in traditional tests of intelligence (e.g., chess, Go, poker), more of us are becoming aware of how machines are altering our behaviour. Look around when you are on the tube or in a restaurant: people are glued to their mobile devices, be it playing games or endlessly checking their social media. Clickbait and tremendously optimised A/B testing ensure that our reward centres keep us addicted to the need to be connected, all in aid of targeted marketing. Algorithms increasingly and clandestinely affect everything we do, from how we shop to how we vote. They will increasingly shape how we learn and what we feel is the purpose of existence.
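
To make the "tremendously optimised A/B testing" concrete, the sketch below (with invented click counts and a hypothetical helper function) shows the kind of two-proportion test that decides which variant of a headline or notification keeps users engaging more; run continuously over many variants, this is how engagement-maximising interfaces get tuned:

```python
# Illustrative two-proportion z-test of the kind used in A/B testing.
# Click and view counts are invented for the example.
from math import sqrt

def ab_test(clicks_a, views_a, clicks_b, views_b):
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)       # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se                                       # standardised lift
    return p_a, p_b, z

p_a, p_b, z = ab_test(clicks_a=480, views_a=10_000, clicks_b=560, views_b=10_000)
print(f"A: {p_a:.3%}  B: {p_b:.3%}  z = {z:.2f}")
# |z| > 1.96 is conventionally read as a significant lift, so variant B ships.
```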

4. Mistakes Embedded in Learning Algorithms: machines learn through examples, just like humans. Humans, however, are not endlessly scalable and have a finite lifetime. When machines learn and become good at particular tasks, they are endlessly scalable and have the potential to dominate a particular issue, say assessing insurance claims. In the not-too-distant future, these algorithms will be so accurate and advanced that no human will be able to compete with their logic. This will make it increasingly difficult to challenge such algorithms even when they contain mistakes. Have a look at some of the biggest failures in AI last year.

5. Bias Embedded in AI: humans have memes and cultures that embed particular biases and prejudices in their being. With machines and AI, it is the data. If the data has bias, then the AI will have bias. The vast majority of AI's applications today are based on the category of algorithms known as deep learning, and it is this class of algorithms that finds patterns in data. Algorithms can perpetuate injustice in hiring, retail, and security, and may already be doing so in the criminal legal system. Indeed, there have been many examples where the current generation of 'fair' algorithms perpetuates discrimination.
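
A minimal sketch of that point, using an entirely synthetic hiring dataset with invented features, shows how a model trained on biased historical decisions simply reproduces the bias in its predictions:

```python
# Illustrative only: a model trained on biased historical hiring decisions
# reproduces that bias. All data and column meanings are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
skill = rng.normal(0, 1, n)            # the quality we would like to measure
group = rng.integers(0, 2, n)          # a protected attribute (0 or 1)

# Historical labels: equally skilled candidates from group 1 were hired less often.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The learned model carries the historical penalty for group 1 forward:
test = np.array([[0.5, 0], [0.5, 1]])  # identical skill, different group
print(model.predict_proba(test)[:, 1].round(3))
```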


6. Keeping AI Away from ‘Bad’ Use: there are always people underground (for instance on the dark web) who research the use of AI for nefarious reasons. These include gaining control of financial systems, weapon systems, and personal and commercially sensitive data, disrupting due judicial process, and much more. Have a look at this much-shared YouTube video for a relatively naïve and sensationalist feel for the use of AI in warfare. The use of AI in cybersecurity and next-generation neural cryptography will be a field of increasing importance in ensuring that the institutions we currently know and trust can continue to exist. Given that so many state actors are involved in the bad uses of AI, this is almost a non-subject: the technology is largely democratised, and ultimately its use comes down to motivation and need. I believe keeping AI away from bad use is an impossible ask.

7. Unintended Consequences, Singularity and Humane Treatment of AI: intelligence in current systems is superficial, and the present generation of deep-learning systems is little more than tensor algebra and calculus with particular ways of efficient optimisation. However, there is research underway into the building blocks of generalised artificial intelligence that may lead to a form of self-consciousness. Just as we value the rights of animals and the planet as a whole, there will emerge procedures and declarations of the rights of machines. This is to enable machines to be classified according to rights and responsibilities, rather than for humans to be seen as some form of overlords of all creation. It may be that machines come to have vastly more intelligence than is possible for biological systems (theoretically 10,000 times more, just in terms of speed: copper conducts signals 10,000 times faster than biological neural connections), but their form of consciousness (feelings of reward and aversion) may be very different from ours. We may start by implementing our value system in neuromorphic hardware, but the neuroplasticity of such systems (their ability to modify their own mechanisms for learning) may ensure that they find their own rightful place in nature, with the possibility of self-assembly, evolution and purpose. Depending on the value-sets such systems deem necessary for their own sustenance, humans do ponder the issue of "pulling the plug" if we begin to become irrelevant or a hindrance to such intelligence.

In an old, and not well-cited, paper, Nick Bostrom of the Future of Humanity Institute (FHI) at the University of Oxford concludes that "Although current AI offers us few ethical issues that are not already present in the design of cars or power plants, the approach of AI algorithms toward more humanlike thought portends predictable complications. Social roles may be filled by AI algorithms, implying new design requirements like transparency and predictability. Sufficiently general AI algorithms may no longer execute in predictable contexts, requiring new kinds of safety assurance and the engineering of artificial ethical considerations. AIs with sufficiently advanced mental states, or the right kind of states, will have moral status, and some may count as persons— though perhaps persons very much unlike the sort that exist now, perhaps governed by different rules. And finally, the prospect of AIs with superhuman intelligence and superhuman abilities presents us with the extraordinary challenge of stating an algorithm that outputs super ethical behavior. These challenges may seem visionary, but it seems predictable that we will encounter them; and they are not devoid of suggestions for present-day research directions." The FHI divides its work into four main categories: Macrostrategy, AI Safety, the Center for the Governance of AI, and Biotechnology.

I hope that by now the reader sees that the issues around the ethics of AI are deep and wide. AI has the potential to change us and the fabric of our society. The governance and regulatory frameworks of traditional institutions are no match for the technologies and means made possible by AI. One has only to look at the debacle around Brexit, and at elections and selections around the globe, to see the impact of using information and advertising selectively and in a targeted manner. Couple this with the nefarious use of AI and the possible emergence of superintelligence (read Nick Bostrom's Superintelligence and James Lovelock's Novacene: The Coming Age of Hyperintelligence), and one has enough material for a Shakespearian tragedy.
