
No Longer Just Hype: 3 Challenges to Applying AI for Good

 

Tue, 10/22/2019 - 12:00


The term “artificial intelligence” was first coined by American computer scientist John McCarthy in 1955. Back then, he believed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Fast forward 64 years, and we’re only just beginning to see his vision become reality. A report released this year by consulting firm KPMG found that many leaders of large organisations who piloted artificial intelligence applications three years ago are now starting to move them into production. The same leaders also expected their investments in AI-related talent and supporting infrastructure to increase by around 50% to 100% over the coming three years.

It’s starting to become clearer how AI can be a force for good – not only in the corporate sector, but also in terms of social impact. 

Another consulting firm, McKinsey, came up with a list of over 160 AI social-impact use cases that could “contribute to tackling cases across all 17 of the UN’s Sustainable Development Goals, potentially helping hundreds of millions of people in both advanced and emerging countries.” About a third of these use cases are already being implemented somewhere in the world, ranging from small trials in cancer diagnosis to aiding disaster-relief efforts.

One example of this is happening in Singapore, where scientists at the Agency for Science, Technology and Research's Genome Institute of Singapore developed a new type of AI that can accurately pinpoint the roots of gastric cancer.

Yet, AI still has certain challenges to overcome before that potential can be fully realised.


The Need for Internal Governance

While eager to sing the praises of AI, most executives don’t actually understand what’s happening behind the scenes. 

Data scientists and engineers hold all the keys to “the black box,” leaving management scratching their heads when their algorithms produce biased or outright incorrect results, or are put to even more nefarious purposes.

With this in mind, Singapore released a framework for the responsible and ethical use of AI for local organisations earlier this year. The first of its kind in Asia, according to the Infocomm Media Development Authority, it will “provide detailed and readily implementable guidance to private sector organisations using AI.”

A report released by CIFAR, a Canada-based global charitable organisation, also found that Singapore’s AI strategy focused primarily on enhancing “the use of AI technologies in the private sector,” specifically in the areas of research, AI talent, industrial strategy, and ethics. At the national level, such policies are necessary to prompt conversations about ethics and data inclusion within organisations.

Internally, however, organisations still need to design and deploy standard procedures for monitoring and managing AI. Processes for version control, data lineage, and the ongoing monitoring of algorithms need to be instituted and practised diligently across the board. This enables the management team and leaders to keep an eye on what’s happening within the black box.
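To make that less abstract, here is a minimal sketch, in Python and purely for illustration, of the kind of record-keeping such processes imply: a small record tying a model version to a fingerprint of its training data (lineage), plus a deliberately crude check that flags when live inputs drift away from the training data. Every name, threshold, and structure below is hypothetical, not drawn from any particular framework or from the guidelines mentioned above.

```python
# Hypothetical sketch: model lineage record plus a crude drift check.
import hashlib
import json
import statistics
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ModelRecord:
    model_version: str        # e.g. a git tag or semantic version
    dataset_fingerprint: str  # hash of the training data, for lineage
    trained_at: str


def fingerprint_dataset(rows: list[dict]) -> str:
    """Hash the training data so the exact inputs can be traced later."""
    blob = json.dumps(rows, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()


def register_model(version: str, training_rows: list[dict]) -> ModelRecord:
    """Create an auditable record linking a model version to its data."""
    return ModelRecord(
        model_version=version,
        dataset_fingerprint=fingerprint_dataset(training_rows),
        trained_at=datetime.now(timezone.utc).isoformat(),
    )


def drift_alert(training_values: list[float], live_values: list[float],
                tolerance: float = 2.0) -> bool:
    """Flag drift if the live mean moves more than `tolerance` standard
    deviations away from the training mean (a deliberately crude check)."""
    mean = statistics.mean(training_values)
    stdev = statistics.stdev(training_values) or 1e-9
    return abs(statistics.mean(live_values) - mean) > tolerance * stdev


if __name__ == "__main__":
    training = [{"age": a} for a in (34, 45, 29, 52, 41)]
    record = register_model("v1.2.0", training)
    print(asdict(record))

    live_ages = [71.0, 68.0, 75.0, 80.0]  # a noticeably different population
    if drift_alert([float(r["age"]) for r in training], live_ages):
        print("Input drift detected: flag for human review.")
```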

Scarcity of AI Talent

The development of innovative AI solutions requires real expertise. Yet, Montreal-based AI company Element AI estimates that there are fewer than 10,000 people in the world who have the relevant skills to tackle “serious AI problems.” 

In Australia, Sally-Ann Williams, CEO at Deep Tech incubator Cicada Innovations, observes that the “number of computer science graduates in Australia still remains low, and yet the corresponding demand on skilled AI practitioners is rising.”

Naturally, the competition to attract such talent is fierce, often putting it out of reach for social-good organisations. There is, however, good reason to believe that more people with these specialised skills will be in the market in the near future.

Case in point: as part of a programme launched by Singapore’s National Research Foundation, AI Singapore will be launching two initiatives – AI for Industry (AI4I) and AI for Everyone (AI4E) – with the aim of equipping 12,000 Singaporeans with basic AI knowledge over the next three years.

Elsewhere, the largest tech companies are playing their part in educating and bringing new AI talent into the world. Google, for instance, invested US$3.4 million in the Montreal Institute for Learning Algorithms, a research lab at the University of Montreal, while Intel donated US$1.5 million to Georgia Tech to establish a research centre dedicated to machine learning for cybersecurity.

Data Inaccessibility

Thanks to our digitised way of life, a tremendous amount of data is being created, captured, and replicated every day. Market intelligence firm IDC predicts that the world’s data will grow from 33 zettabytes (ZB) in 2018 to 175 ZB by 2025 – a mind-boggling figure.

This is a good thing for the advancement of AI solutions, which require data to learn, adapt, and become better at their jobs. Yet much of the world’s data belongs to the rich and powerful, namely governments and large private companies.

The former are usually open to sharing datasets that can be used in the public interest – think Data.gov.sg, the Singapore government’s one-stop portal to its publicly available datasets from 70 public agencies. For the latter, however, these huge volumes of data represent potential revenue streams, and tend to be commercialised and sold to the highest bidder instead.

As such, social-good organisations may not have the means to access the datasets they require.

Thankfully, some prominent companies are taking a stand and leading the way in sharing data for social good. Mastercard, for example, created a “data philanthropy” programme in response to a mandate from its board of directors to “think about Mastercard’s assets broadly, and then think about how those assets can be applied for social good.”

So far, the programme has donated data to Harvard’s Center for International Development to aid a study on how businesses develop and expand across the world, as well as to Barack Obama’s Data-Driven Justice Initiative to examine how high crime rates affect local small businesses and job opportunities.

However, Liza Noonan, ASEAN director at the Commonwealth Scientific and Industrial Research Organisation (CSIRO), cautions that “social entrepreneurs need access to good advice and mentorship on what data is available and their obligations when accessing and developing products from that data.” 

Privacy regulations, cybersecurity systems, and good data governance policies, she continues, need to be developed to ensure such data is used in the best way possible.

A Matter of Time

While it is certain that AI will play a massive role in tackling some of society’s most pressing issues, it is equally clear that there is some way to go before these use cases can be applied successfully and sustainably – particularly in the area of social good. With some of the world’s best minds on the job, it really is only a matter of time before these challenges are surmounted.

At SGInnovate, we believe that Deep Technology has the potential not only to solve some of the world’s toughest challenges but also to maximise the social good it can do. That is why we recently hosted a roundtable on Deep Tech for Good with CSIRO Australia, where participants shared their challenges, opportunities, and potential collaborations in building Deep Tech.
