Science & Technology

Safeguarding Children in the Age of AI

  • 29 Sep 2023
  • 12 min read

This editorial is based on the article "Children, a key yet missed demographic in AI regulation", which was published in The Hindu on 26/09/2023. It examines how digital services, particularly cutting-edge AI deployments, are accessed by children even though they are not designed specifically for them.

India is gearing up to host global AI summits, underscoring the strategic importance of AI for the Indian economy. India is scheduled to host two significant meetings on Artificial Intelligence (AI) in late 2023: the first, in October 2023, is billed as the world's inaugural global AI summit; following this, in December 2023, India will assume the chairmanship of the Global Partnership on Artificial Intelligence (GPAI).

But with this technological advancement comes the pressing need for robust regulation. Children and adolescents, in particular, are vulnerable to various risks associated with AI, and India's existing data protection laws may fall short in addressing these challenges.

What is AI Regulation?

AI regulation refers to the rules, laws, and guidelines established by governments and regulatory bodies to govern the development, deployment, and use of artificial intelligence technologies.

The primary aim of AI regulation is to ensure that AI systems are developed and used in ways that are safe, ethical, and beneficial to society while mitigating potential risks and harms. AI regulation can cover a wide range of aspects, including:

  • Safety and Reliability: Regulations may require AI developers to adhere to safety standards to prevent accidents or malfunctions caused by AI systems. This is particularly important in critical domains like autonomous vehicles or medical diagnostics.
  • Ethical Considerations: Some AI applications, especially in critical areas like healthcare or finance, may be required to have human oversight to ensure that AI decisions are in line with human values and ethics.
  • Data Privacy: Many AI systems rely on large amounts of data. Regulations like the European Union's General Data Protection Regulation (GDPR) set standards for how personal data should be handled and protected in AI applications.
  • Transparency and Accountability: Some regulations may require AI developers to provide transparency into their algorithms, making it easier to understand how AI systems make decisions and to hold developers accountable for their outcomes.
  • Export Controls: Governments may regulate the export of AI technologies to prevent sensitive AI capabilities from falling into the wrong hands.
  • Compliance and Certification: AI developers may need to comply with specific certification requirements to ensure their AI systems meet regulatory standards.
  • International Cooperation: Given the global nature of AI, there is also a growing need for international cooperation on AI regulation to avoid conflicts and ensure consistent standards.

What are the AI Regulatory Laws Around the World?

  • European Union (EU): The EU is working on the draft Artificial Intelligence Act, aiming to regulate AI comprehensively. This legislation is expected to address various aspects of AI, including risk classification, data subject rights, governance, liability, and sanctions.
  • Brazil: Brazil is in the process of developing its first AI regulation. The proposed regulation focuses on guaranteeing the rights of individuals affected by AI systems, classifying the level of risk, and implementing governance measures for AI operators. It has parallels with the EU's draft AI Act.
  • China: China has been actively regulating AI, with specific provisions for algorithmic recommendation systems and deep synthesis technologies. China's Cyberspace Administration is also considering measures to ensure the safety and accuracy of AI-generated content.
  • Japan: Japan has adopted a set of social principles and guidelines for AI developers and companies. While these measures are not legally binding, they reflect the government's commitment to responsible AI development.
  • Canada: Canada has introduced the Digital Charter Implementation Act 2022, which includes the Artificial Intelligence and Data Act (AIDA). AIDA aims to regulate the trade in AI systems and address potential harms and biases associated with high-performance AI.
  • United States: The U.S. has released non-binding guidelines and recommendations for AI risk management. The White House has also published the Blueprint for an AI Bill of Rights, which lays out principles for the design, use, and deployment of automated systems.
  • India: India is considering the establishment of a supervisory authority for AI regulation. Working papers suggest the government's intention to introduce principles for responsible AI and coordination across various AI sectors.
    • Given the sheer volume of data that India can generate, it has an opportunity to set a policy example for the Global South. Observers and practitioners will track India’s approach to regulation and how it balances AI’s developmental potential against its collateral risks.
    • One area where India can assume leadership is how regulators address children and adolescents who are a critical (yet less understood) demographic in this context.

Why is there a Need for Robust AI Regulation for Child Safety?

  • Regulating AI for overall Safety:
    • Regulations should focus on aligning incentives to tackle addiction, mental health issues, and overall safety concerns.
    • There are risks of data-hungry AI services deploying deceptive practices to exploit impressionable youth.
  • Body Image and Cyber Threats:
    • AI-driven distortions of physical appearance can lead to body image issues among young people.
    • AI's role in spreading misinformation, radicalization, cyberbullying and sexual harassment is potentially significant.
  • Impact of Family's Online Activity:
    • Parents sharing their children's photos online can expose children and adolescents to privacy and safety risks.
  • Deep Fake Vulnerabilities:
    • AI-powered deep fakes can target young individuals, including morphed explicit content distribution.
  • Intersectional Identities and Bias:
    • There is a diverse landscape of gender, caste, tribal identity, religion, and linguistic heritage in India.
    • There could be potential transposition of real-world biases into digital spaces, impacting marginalized communities.
  • Reevaluating Data Protection Laws:
    • The current data protection framework in India lacks effectiveness in protecting children's interests.
    • At the same time, a blanket ban on tracking children's data by default, while protective, can also deny children the benefits of safe, age-appropriate personalisation.

What can India do to Protect Young Citizens while preserving the Benefits of AI?

  • Drawing from UNICEF's Guidance:
    • UNICEF's guidance, based on the UN Convention on the Rights of the Child, emphasizes nine requirements for child-centric AI.
    • This guidance can be used to create a digital environment that promotes children's well-being, fairness, safety, transparency, and accountability.
  • Embracing Best Practices:
    • California's Age-Appropriate Design Code Act serves as a template, mandating high-privacy default settings and assessments of the potential harm to children from algorithms and data collection.
    • Establishment of institutions like Australia's Online Safety Youth Advisory Council can be considered.
  • Age-Appropriate Design Code for AI:
    • Indian authorities should encourage research to collect evidence regarding the impact of AI on Indian children and adolescents.
    • Gathered evidence can be set as a foundation for developing an Indian Age-Appropriate Design Code for AI.
  • Role of the Digital India Act (DIA):
    • The upcoming Digital India Act (DIA) should enhance protection for children interacting with AI.
    • It should promote safer platform operations and user interface designs.
  • Child-Friendly AI Products and Services:
    • AI-driven platforms should offer age-appropriate content and services that enhance education, entertainment, and overall well-being.
    • Robust parental control features that allow parents to monitor and limit their children's online activities should be implemented.
  • Digital Feedback Channels:
    • Child-friendly online feedback channels where children can share their AI-related experiences and concerns should be developed.
    • Interactive tools like surveys and forums should be used to gather inputs.
  • Spreading the Message:
    • Public awareness campaigns should highlight the importance of children's participation in shaping AI's future.
    • Influencers and role models may be involved to amplify the message.

Conclusion

In the era of rapidly advancing AI, Indian regulation must prioritize the interests and safety of its young citizens. Incorporating global best practices, fostering dialogue with children, and developing adaptable regulations are essential steps toward ensuring a secure and beneficial digital environment for India's youth.

Drishti Mains Question

While artificial intelligence holds immense potential for the Indian economy, it is not without its challenges. Emphasize the importance of strong AI regulation to ensure child safety and propose measures to address this need.

UPSC Civil Services Examination, Previous Year Question (PYQ)

Q1. With the present state of development, Artificial Intelligence can effectively do which of the following? (2020)

  1. Bring down electricity consumption in industrial units
  2. Create meaningful short stories and songs
  3. Disease diagnosis
  4. Text-to-Speech Conversion
  5. Wireless transmission of electrical energy

Select the correct answer using the code given below:

(a) 1, 2, 3 and 5 only
(b) 1, 3 and 4 only
(c) 2, 4 and 5 only
(d) 1, 2, 3, 4 and 5

Ans: (b)
