AI Ethics and Bias: What Students Should Know (CBSE Class 7-8)
Artificial Intelligence is powerful, but with great power comes great responsibility. AI can sometimes be unfair, make mistakes, or be used in harmful ways. Understanding AI ethics helps us use AI responsibly. This guide explains AI ethics and bias in simple language for CBSE Class 7-8 students.
What is AI Ethics?
AI Ethics is the study of what is right and wrong when creating and using AI systems. Just like we have rules about how people should behave, we need rules about how AI should behave.
Why do we need AI ethics?
- AI makes decisions that affect people's lives
- AI can be unfair without anyone realizing it
- AI can invade people's privacy
- AI mistakes can have serious consequences
- We need to make sure AI benefits everyone, not just a few
What is AI Bias?
AI Bias happens when an AI system produces unfair results that favor certain groups over others. Since AI learns from data, if the data is biased, the AI will be biased too.
Simple analogy: Imagine you only ate food from one restaurant your entire life. You would think that restaurant's food is the best because you have no other experience. AI is similar: it only knows what its training data tells it.
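The restaurant analogy can be turned into a toy Python sketch. This is not a real AI system; "training" here just means counting examples, and the restaurant data is made up. The point is that a model can only ever predict things it has seen in its training data.

```python
from collections import Counter

def train(examples):
    # "Training" this toy model just counts how often each answer appears.
    return Counter(examples)

def predict(model):
    # The model can only predict answers it has already seen.
    return model.most_common(1)[0][0]

# Biased training data: every example comes from the same restaurant.
biased_data = ["Restaurant A"] * 10
model = train(biased_data)
print(predict(model))  # Restaurant A - the model has never seen anything else
```

If you add examples from other restaurants to `biased_data`, the prediction can change. That is exactly why diverse training data matters.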
How Does Bias Enter AI?
| Source | How Bias Enters | Example |
|---|---|---|
| Biased Training Data | Data does not represent all groups equally | A face recognition AI trained mostly on light-skinned faces performs poorly on dark-skinned faces |
| Biased Labels | Humans label data with their own biases | Labeling certain jobs as "male" or "female" |
| Missing Data | Some groups are not represented | An AI health system that has no data about certain diseases common in India |
| Historical Bias | Data reflects past discrimination | AI trained on historical hiring data that favored one gender |
| Selection Bias | Only certain types of data are collected | A survey conducted only in English-speaking communities |
Real-World Examples of AI Bias
Example 1: Job Hiring AI
A company used AI to screen job applications. The AI was trained on past hiring data. Since the company had historically hired more men, the AI started rejecting women's applications. This was unfair because the AI learned the company's past bias.
Example 2: Face Recognition
Some face recognition systems work well on light-skinned faces but make more errors on dark-skinned faces. This happens because the training data had more photos of light-skinned people.
Example 3: Language Translation
When translating from English to some languages, AI sometimes assumes doctors are male and nurses are female, reflecting gender stereotypes in the training data.
Example 4: Search Results
When you search for "CEO" on image search, you might see mostly images of men. This reflects the current reality but also reinforces the stereotype.
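The hiring example above can be sketched in a few lines of Python. The hiring records are invented for illustration, and the "AI" is deliberately naive: it scores applicants by how often their group appears among past hires, so it copies the historical bias directly.

```python
from collections import Counter

# Hypothetical historical hiring records (made up for illustration):
# the company hired mostly men in the past.
past_hires = ["male"] * 8 + ["female"] * 2

counts = Counter(past_hires)
total = sum(counts.values())

def hire_score(gender):
    # A naive "AI" that scores applicants by how often their group
    # appears among past hires - it learns the past bias, not merit.
    return counts[gender] / total

print(hire_score("male"))    # 0.8
print(hire_score("female"))  # 0.2 - same qualifications, lower score
```

Notice that the applicant's actual qualifications never appear anywhere in the code. Real biased systems are more subtle, but the root cause is the same: the data taught the model who "usually" gets hired.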
Key Principles of AI Ethics
1. Fairness
AI should treat everyone equally, regardless of their gender, race, religion, age, or economic status.
Questions to ask:
- Does this AI work equally well for everyone?
- Are any groups being treated unfairly?
- Would we be comfortable if this AI made a decision about us?
2. Transparency
People should know when AI is being used and how it makes decisions.
Questions to ask:
- Do users know they are interacting with AI?
- Can we explain why the AI made a certain decision?
- Is the AI's decision-making process open to examination?
3. Privacy
AI should respect people's personal information and not collect more data than needed.
Questions to ask:
- What data is being collected?
- Do people know their data is being used?
- Is the data stored securely?
- Is the data being shared with others without permission?
4. Accountability
Someone should be responsible for what AI does. If AI makes a mistake, there should be a way to fix it.
Questions to ask:
- Who is responsible if the AI causes harm?
- Is there a way to correct AI mistakes?
- Can people appeal AI decisions?
5. Safety
AI should not cause harm to people or the environment.
Questions to ask:
- Could this AI be used to hurt someone?
- What happens if the AI makes an error?
- Are there safeguards to prevent misuse?
Privacy and AI
What Data Does AI Collect?
When you use AI-powered apps and services, they collect data about you:
| Data Type | Examples |
|---|---|
| Personal information | Name, age, email, phone number |
| Location data | Where you go, tracked by your phone |
| Browsing history | Websites you visit |
| Search history | What you search for online |
| Social media | Your posts, likes, friends, photos |
| Voice data | Conversations with voice assistants |
| Shopping data | What you buy online |
| Health data | Steps, heart rate from fitness trackers |
How to Protect Your Privacy
- Read privacy policies before signing up for apps
- Use strong passwords and two-factor authentication
- Limit what you share on social media
- Review app permissions - does a game really need access to your contacts?
- Use privacy settings on all your accounts
- Be careful with voice assistants - they may record conversations
- Think before sharing - once online, it is hard to take back
AI and the Environment
AI has an environmental impact too:
| Impact | Explanation |
|---|---|
| Energy consumption | Training large AI models uses enormous amounts of electricity |
| Carbon footprint | Data centers that run AI produce carbon emissions |
| E-waste | Old AI hardware becomes electronic waste |
Positive side: AI can also help the environment by optimizing energy use, predicting weather, monitoring deforestation, and improving agricultural efficiency.
Ethical Dilemmas in AI
An ethical dilemma is a situation where there is no clearly right or wrong answer. Here are some AI ethical dilemmas to think about:
Dilemma 1: Self-Driving Car
A self-driving car's brakes fail. It can either:
- Go straight and hit three pedestrians
- Turn and hit one pedestrian
What should the AI choose? This is a modern version of the famous "trolley problem." There is no easy answer, and different people have different opinions.
Dilemma 2: AI in Exams
Should schools use AI to monitor students during online exams?
- For: Prevents cheating and ensures fairness
- Against: Invades student privacy, creates stress, may flag innocent behavior
Dilemma 3: AI Recommendations
Should YouTube's AI recommend videos that keep you watching longer, even if some of those videos contain misinformation?
- For the company: More watching time means more advertising money
- For the user: Misinformation can be harmful
Dilemma 4: AI in Healthcare
Should AI be used to decide who gets medical treatment first?
- For: AI can be faster and more consistent than humans
- Against: AI might discriminate against certain groups based on biased data
Responsible AI Use
For Students
- Think critically about AI-generated content; do not believe everything AI says
- Do not use AI to cheat in exams or homework
- Report if you see AI being used to bully or harass
- Protect your data - be careful about what information you share with AI apps
- Learn about AI - understanding how it works helps you use it responsibly
For AI Developers
- Use diverse, representative training data
- Test for bias before deploying AI
- Make AI decisions explainable
- Get consent before collecting user data
- Build safety measures into AI systems
- Create ways for people to appeal AI decisions
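"Test for bias before deploying" can be made concrete with a small sketch. This hypothetical check compares a model's accuracy across two groups; the (prediction, truth) pairs and the warning threshold are invented for the example.

```python
# Hypothetical pre-deployment bias check: compare a model's accuracy
# across two groups. The (prediction, truth) pairs are invented data.
def accuracy(pairs):
    return sum(pred == truth for pred, truth in pairs) / len(pairs)

group_a = [(1, 1), (0, 0), (1, 1), (1, 0)]  # 3 of 4 correct
group_b = [(1, 0), (0, 1), (1, 1), (0, 0)]  # 2 of 4 correct

gap = accuracy(group_a) - accuracy(group_b)
print(f"Accuracy gap between groups: {gap:.2f}")
if gap > 0.1:  # the threshold is an arbitrary choice for this sketch
    print("Warning: the model may be biased - investigate before deploying")
```

Real fairness audits use more careful metrics than a single accuracy gap, but the habit is the same: measure performance per group, not just overall, before the AI goes live.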
AI Ethics in India
India is working on responsible AI:
- NITI Aayog's #AIForAll strategy emphasizes responsible AI that benefits all Indians
- The Digital Personal Data Protection Act, 2023 protects citizens' personal data
- AI ethics guidelines are being developed for Indian industries
- Focus on making AI work for Indian languages, rural communities, and diverse populations
Discussion Questions
Think about these questions and discuss with your classmates:
- Should AI be used to grade student essays? What are the pros and cons?
- Is it fair for companies to use AI to decide who gets a loan?
- Should AI-generated art and writing be considered "real" art?
- How can we make sure AI voice assistants do not eavesdrop on private conversations?
- Should there be a law requiring companies to tell you when you are talking to an AI chatbot?
Key Takeaways
- AI Ethics is about ensuring AI is used fairly and responsibly
- AI Bias happens when AI produces unfair results due to biased data
- Five principles: Fairness, Transparency, Privacy, Accountability, Safety
- AI learns from data, so biased data leads to biased AI
- Protect your privacy by being careful about what data you share
- Ethical dilemmas in AI do not have easy answers
- Everyone, including students, has a role in using AI responsibly
- India's #AIForAll emphasizes inclusive and responsible AI development
Understanding AI ethics makes you a responsible digital citizen. As AI becomes more common in daily life, knowing how to question, evaluate, and use AI responsibly will be one of the most important skills you can have.