Striking a balance between promoting innovation and safeguarding the rights and interests of individuals and businesses becomes crucial in navigating the ethical landscape of generative artificial intelligence (AI).
As generative AI evolves, regulatory bodies face mounting pressure to address critical issues around data privacy, security, intellectual property rights, and the potential misuse of AI-generated content, according to a technology expert.
“From a regulatory standpoint, I think it’s tricky. It is very hard to identify the copyrights. For example, if they’ve used thousands and millions of images to create a new generative AI image, it is very hard to put a finger on who owns a copyright for that,” International Data Corporation (IDC) Asia Pacific associate vice president Deepika Giri said during a media briefing on May 25.
“I think it’s really difficult for any government body to regulate AI simply because it’s a technology that’s so powerful,” she emphasized. “That’s a largely debated topic and they haven’t really found a solution and I don’t see that happening in the near future.”
Generative AI is a branch of computer science that uses unsupervised and semi-supervised algorithms to enable computers to generate new content — such as text, audio, video, images, and code — from existing data. It has been a hot topic ever since the launch of ChatGPT in November 2022, which attracted two million users within a week of its release.
Based on the IDC study, the majority of local vendors in the Asia Pacific region, primarily from China and South Korea, are seeking to integrate their own versions of ChatGPT into gaming, chat, social media, and other applications. About 70 percent of business organizations found to be doing initial exploration of potential use cases are already investing in generative AI technologies in 2023.
It noted that generative AI is expected to have the most significant impact in the next 18 months in two key areas: product design and development, and software development. In Asia Pacific, three use cases are anticipated to hold the most promise for organizations: knowledge management, code generation, and marketing applications.
According to IDC, AI adoption in Asia Pacific has grown exponentially across use cases and applications, rising from a mere 20 percent in 2019 to about 76 percent in 2022. Among the mature adopters of AI are China, Japan, Australia, and South Korea, while India, Taiwan, and Indonesia are the fastest-growing markets in this space.
Giri, who is also the head of IDC research in big data and analytics (BDA) and AI across the Asia Pacific region, highlighted that because generative AI leverages machine learning to infer information, it can produce inaccuracies that users need to acknowledge.
She cited deepfakes — videos in which a person’s face or body has been digitally altered to appear to be someone else’s — as something that can be used to spread false information.
Such AI-generated content, Giri noted, can be difficult or impossible to distinguish from real media, posing serious ethical implications. “An ethical concern around generative AI is the ambiguity over authorship and copyright of the AI-generated content,” she said.
Data security and privacy are also significant aspects because uploading of personal or proprietary information for the purposes of training these models could expose sensitive details, she added.
“It’s important to understand these risks and what are the threats to the organization, and what kind of data leakage [is] possible. And it’s important to audit and remove all personal identifiers or any sensitive information before [sorting] or sending the data to the model,” she explained. “It is important to also review the generated content drafts to clients and ensure customer privacy and data confidentiality.”
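The auditing step Giri describes — removing personal identifiers before data reaches a model — can be sketched in a few lines. The snippet below is purely illustrative (the `redact_pii` helper and its regex patterns are this article’s own hypothetical example, not an IDC recommendation); production systems typically rely on dedicated PII-detection tooling rather than hand-rolled patterns:

```python
import re

# Naive patterns for two common identifier types; real deployments
# would use far more robust detection (named entities, IDs, addresses).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\b(?:\d{1,3}[ -]?)?(?:\d[ -]?){8,11}\d\b"),
}

def redact_pii(text: str) -> str:
    """Replace each detected identifier with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

# Example: scrub a record before it is sent to a model for training.
record = "Contact Maria Cruz at maria.cruz@sample.test or +63 912 345 6789."
print(redact_pii(record))
```

Regex-based scrubbing like this catches only well-formed identifiers; names, addresses, and free-text details slip through, which is why Giri also recommends reviewing generated drafts before they reach clients.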
Despite these concerns, the tech expert noted that there is currently no legislation specifically addressing generative AI in the Asia Pacific region, as such legislation is viewed as “an impediment to the spirit of innovation in a digital economy.”
For Dr. Chris Marshall, vice president for data, analytics, AI, future of work, and sustainability at IDC Asia Pacific, one of the challenges lies in the difficulty of regulating generative AI, both in terms of controlling the technology’s advancement and governing the usage of data derived from it.
“I think the regulatory landscape is still very, very fluid at this point,” Marshall said. “The reality is, particularly in Asia, people are talking about guidelines, policies, and recommendations for [corporations]. [In] general, it’s really very, very early days.”
In the context of current developments, the Indian government has opted not to impose AI regulations on the digital economy, reasoning that strict laws could impede innovation and research. On the other hand, the Cyberspace Administration of China has introduced security assessments of generative AI services before their public release, emphasizing the importance of ensuring their safety and managing their impact.
Implications for skilled workers, employment, media businesses
Marshall said the availability of skilled workers is a problem in the short term, although it has eased significantly in the past few months due to a decline in the job market.
This decline, the IDC official noted, has specifically affected larger organizations that used to attract most of the talented professionals in the fields of AI and data science.
“A downturn obviously is bad news for them, but in one sense, it frees up some of the talent to augment the broader data science and AI skills in the region. And this is actually a good thing because it’s sort of a way of democratizing the capability of AI more broadly,” Marshall said.
He said the widespread adoption of new technologies is likely to result in some unemployment in certain sectors, which is almost unavoidable.
But Marshall believes this will also lead to the creation of new jobs, though whether one will balance out the other is uncertain. He added that evidence indicates most technologies have actually generated more jobs than they have eliminated.
He also said the new technology has implications for the media business, raising risks and concerns in the industry, as “what tends to happen is the value of the good content decreases because people can’t tell the difference between true and false content.”
“There’s lots of proposed ways of fixing it, but it’s still early days. It’s not clear how you’ll be able to tell these sorts of fake media content stories apart from the true ones.” — Franz Lewin Embudo
Spotlight is BusinessWorld’s sponsored section that allows advertisers to amplify their brand and connect with BusinessWorld’s audience by enabling them to publish their stories directly on the BusinessWorld website. For more information, send an email to firstname.lastname@example.org.