
How Singapore is creating more inclusive AI

A bespoke model might be the answer to Western-focused LLMs. Here's what it can do for Southeast Asia.
Written by Eileen Yu, Senior Contributing Editor
Image: Weiquan Lin/Getty

As the adoption of generative artificial intelligence (AI) grows, it appears to be running into an issue that has also plagued other industries: a lack of inclusivity and global representation. 

Encompassing 11 markets, including Indonesia, Thailand, and the Philippines, Southeast Asia has a total population of some 692.1 million people. Its residents speak more than a dozen main languages, including Filipino, Vietnamese, and Lao. Singapore alone has four official languages: Chinese, English, Tamil, and Malay. 

Most major large language models (LLMs) in global use today have little Asian focus, leaving huge pockets of populations and languages underrepresented. Countries like Singapore are looking to plug this gap, particularly for Southeast Asia, so the region has LLMs that better understand its diverse contexts, languages, and cultures.

Singapore is among several nations in the region that have highlighted the need to build foundation models capable of mitigating the data bias in current LLMs, most of which originate from Western countries. 

According to Leslie Teo, senior director of AI products at AI Singapore (AISG), Southeast Asia needs models that are both powerful and reflective of the region's diversity. AISG believes the answer lies in Southeast Asian Languages in One Network (SEA-LION), an open-source LLM touted to be smaller, more flexible, and faster than others on the market today. 

Also: Connected companies are set up for the AI-powered economy

SEA-LION, whose development AISG manages and leads, currently comes in two base models: a three-billion-parameter version and a seven-billion-parameter version. 

Pre-trained and instruct-tuned for Southeast Asian languages and cultures, they were trained on 981 billion language tokens, which AISG defines as fragments of words created from breaking down text during the tokenization process. These fragments include 623 billion English tokens, 128 billion Southeast Asian tokens, and 91 billion Chinese tokens.

The tokenizers used by popular LLMs are often English-centric; if very little of their training data reflects Southeast Asia, the models will not be able to understand the region's context, Teo said. 
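
To see what an English-centric tokenizer does to the region's languages, the short Python sketch below counts the tokens produced for comparable sentences in English, Malay, and Thai. GPT-2's tokenizer is used purely as a readily available example of an English-centric tokenizer; it is not affiliated with SEA-LION, and the sample sentences are illustrative.

    # Illustrative only: GPT-2's tokenizer stands in for any English-centric
    # tokenizer; it is not affiliated with SEA-LION or AISG.
    # Requires: pip install transformers
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    samples = {
        "English": "The weather is very hot today.",
        "Malay": "Cuaca sangat panas hari ini.",
        "Thai": "วันนี้อากาศร้อนมาก",
    }

    # Underrepresented scripts tend to split into many more tokens per sentence,
    # leaving the model less effective room for context in a fixed token budget.
    for language, text in samples.items():
        tokens = tokenizer.tokenize(text)
        print(f"{language}: {len(tokens)} tokens")

The Thai sentence, written in a script the tokenizer has rarely seen, typically splits into several times as many tokens as the English one, leaving the model with less usable context for the same amount of text.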

He noted that 13% of the data behind SEA-LION is Southeast Asian-focused. By contrast, Meta's Llama 2 only contains 0.5%. 

A new seven-billion-parameter model for SEA-LION is slated for release in mid-2024, Teo said, adding that it will be built on a different base model from the current iteration. Plans are also underway for 13-billion- and 30-billion-parameter models later this year. 

He explained that the goal is to improve the LLM's performance with bigger models that can make better connections, handle zero-shot prompting, and demonstrate a stronger contextual understanding of regional nuances.

Teo noted the lack of robust benchmarks available today to evaluate the effectiveness of an AI model, a void Singapore is also looking to address. He added that AISG aims to develop metrics to identify whether there is bias in Asia-focused LLMs.

As new benchmarks emerge and the technology continues to evolve, new iterations of SEA-LION will be released to achieve better performance. 

Also: Singapore boosts AI with quantum computing and data centers

Better relevance for organizations 

As the driver behind regional LLM development with SEA-LION, Singapore plays a key role in building a more inclusive and culturally aware AI ecosystem, said Charlie Dai, vice president and principal analyst at market research firm Forrester.

He urged the country to collaborate with other regional countries, research institutions, developer communities, and industry partners to further enhance SEA-LION's ability to address specific challenges, as well as promote awareness about its benefits.

According to Biswajeet Mahapatra, a principal analyst at Forrester, India is also looking to build its own foundation model to better support its unique requirements. 

"For a country as diverse as India, the models built elsewhere will not meet the varying needs of its diverse population," Mahapatra noted. 

By building foundation AI models at a national level, he added, the Indian government would be able to provide a wider range of services to citizens, including welfare schemes based on various parameters, enhanced crop management, and healthcare services for remote parts of the country. 

Furthermore, such models ensure data sovereignty, improve public sector efficiency, boost national capacity, and drive economic growth and capabilities across different sectors, such as medicine, defense, and aerospace, he said. He noted that Indian organizations are already working on proofs of concept, and that startups in Bangalore are collaborating with the Indian Space Research Organization and Hindustan Aeronautics to build AI-powered solutions. 

Asian foundation models might perform better on tasks related to language and culture, and be context-specific to these regional markets, he explained. Because these models can handle a wide range of languages, including Chinese, Japanese, Korean, and Hindi, leveraging them can be advantageous for organizations operating in multilingual environments, he added.

Dai anticipates that most organizations in the region will adopt a hybrid approach, tapping both Asia-Pacific and US foundation models to power their AI platforms. 

Furthermore, he noted that, as a general practice, companies follow local regulations around data privacy; tapping models trained specifically for the region supports this, as they may already be fine-tuned with data that adheres to local privacy laws. 

In its recent report on Asia-focused foundation models, for which Dai was the lead author, Forrester described this space as "fast-growing," with competitive offerings that take a different approach from their North American counterparts, which built their models around the assumption of similar adoption patterns across markets. 

"In Asia-Pacific, each country has varied customer requirements, multiple languages, and regulatory compliance needs," the report states. "Foundation models like Baidu's Ernie 3.0 and Alibaba's Tongyi Qianwen have been trained on multilingual data and are adept at understanding the nuances of Asian languages."

The report highlighted that China currently leads production with more than 200 foundation models. The Chinese government's emphasis on technology self-reliance and data sovereignty is the driving force behind this growth.

However, other models are emerging quickly across the region, including Wiz.ai for Bahasa Indonesia and Sarvam AI's OpenHathi for regional Indian languages and dialects. According to Forrester, Line, NEC, and venture-backed startup Sakana AI are among those releasing foundation models in Japan. 

"For most enterprises, acquiring foundation models from external providers will be the norm," Dai wrote in the report. "These models serve as critical elements in the larger AI framework, yet, it's important to recognize that not every foundation model is of the same [caliber]. 

Also: Google plans $2B investment for data center and cloud buildout in Malaysia

"Model adaptation toward specific business needs and local availability in the region are especially important for firms in Asia-Pacific," he continued. 

Dai also noted that professional services attuned to local business knowledge are required to facilitate data management and model fine-tuning for enterprises in the region. He added that the ecosystem around local foundation models will, therefore, have better support in local markets.

"The management of foundation models is complex and the foundation model itself is not a silver bullet," he said. "It requires comprehensive capabilities across data management, model training, finetuning, servicing, application development, and governance, spanning security, privacy, ethics, explainability, and regulatory compliance. And small models are here to stay."

Dai also advised organizations to have "a holistic view in the evaluation of foundation models" and maintain a "progressive approach" in adopting gen AI. When evaluating foundation models, he recommended companies assess three key categories: adaptability and deployment flexibility; business, such as local availability; and ecosystem, such as retrieval-augmented generation (RAG) and API support. 
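
To make the ecosystem criterion concrete, the sketch below shows the basic shape of a RAG workflow: retrieve locally relevant passages, assemble them into a prompt, and hand that prompt to whichever foundation model a firm has adopted. The retriever is a deliberately naive keyword scorer, and the generation call is left as a placeholder; no specific provider or API is implied.

    # Minimal RAG sketch: a naive keyword retriever plus prompt assembly.
    # The generation step is a placeholder for whichever regional or US
    # foundation model an organization adopts; no specific API is implied.

    def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
        """Rank documents by keyword overlap with the query (toy retriever)."""
        query_words = set(query.lower().split())
        ranked = sorted(
            documents,
            key=lambda doc: len(query_words & set(doc.lower().split())),
            reverse=True,
        )
        return ranked[:top_k]

    def build_prompt(query: str, passages: list[str]) -> str:
        """Combine retrieved passages and the question into a single prompt."""
        context = "\n".join(f"- {p}" for p in passages)
        return (
            "Answer using only the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {query}"
        )

    documents = [
        "Singapore has four official languages: Chinese, English, Tamil, and Malay.",
        "Cuti umum di Singapura termasuk Hari Raya Puasa dan Deepavali.",  # Malay
        "The office closes at 6pm on weekdays.",
    ]

    question = "What are Singapore's official languages?"
    prompt = build_prompt(question, retrieve(question, documents))
    print(prompt)  # This prompt would then be sent to the chosen model's API.

In practice, the toy retriever would be replaced by a multilingual embedding index, and the final call would go to the chosen model's API, which is where regional fine-tuning and local availability start to matter.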

Maintaining human-in-the-loop AI

When asked if it was necessary for major LLMs to be integrated with Asian-focused models -- especially as companies increasingly use gen AI to support work processes like recruitment -- Teo underscored the importance of responsible AI adoption and governance.

"Whatever the application, how you use it, and the outcomes, humans need to be accountable, not AI," he said. "You're accountable for the outcome, and you need to be able to articulate what you're doing to [keep AI] safe."

He expressed concern that human accountability alone might not be adequate as LLMs become a part of everything, from assessing resumes to calculating credit scores.

"It's disconcerting that we don't know how these models work at a deeper level," he said. "We're still at the beginning of LLM development, so explainability is an issue."

He highlighted the need for frameworks that enable responsible AI, not just for compliance but also to ensure that customers and business partners can trust the AI models used by organizations. 

Also: Generative AI may be creating more work than it saves

As Singapore Prime Minister Lawrence Wong noted during the AI Seoul Summit last month, risks need to be managed to guard against the potential for AI to go rogue -- especially when it comes to AI-embedded military weapon systems and fully autonomous AI models.

"One can envisage scenarios where the AI goes rogue or rivalry between countries leads to unintended consequences," he said, as he urged nations to assess AI responsibility and safety measures. He added that "AI safety, inclusivity, and innovation must progress in tandem."

As countries gather over their common interest in developing AI, Wong stressed the need for regulation that does not stifle AI's potential to fuel innovation and international collaboration. He advocated pooling research resources, pointing to the AI Safety Institutes around the world, including in Singapore, South Korea, the UK, and the US, and urged them to work together to address common concerns. 
