Chapter 10: Rules and Responsibilities in the AI World — Safety, Ethics, and the Future
The office felt different that morning. Not just because the sun was unusually golden, casting long, thoughtful shadows through the blinds—but because it was the last day of the course.
Ivy arrived early, holding her usual oat milk latte, but today her hands were a little shaky. Maybe it was the caffeine. Maybe it was something else.
Derek was already there, standing near the screen, setting up a new presentation deck. But instead of colorful slides filled with AI-generated content, today’s first slide was simple:
"With Great Power Comes Great Responsibility."
“Ready for the last round?” Derek asked with a grin.
“I don’t know,” Ivy admitted. “I’ve learned so much, but… this part feels heavier. Like we’re crossing into the real-world consequences now.”
“You’re not wrong,” Derek said, turning serious. “Let’s talk about that.”
They began with AI Ethics—a term Ivy had heard tossed around online but never fully grasped. Derek explained it wasn’t just about not creating evil robots. It was about how people use AI: who trains it, who controls it, and what values it reflects.
"Remember the chatbot that turned toxic online?" he asked.
“Yeah,” Ivy replied. “That still blows my mind.”
“It’s not the tech. It’s the data. It’s us. AI learns from what we feed it—literally and metaphorically. So when we build or use AI, we’re also responsible for what it does.”
That thought lingered for a moment.
Then came the Deepfake Demo. Derek played two side-by-side videos: one of a famous politician delivering a public speech, the other an AI-generated version of the same speech with a completely different message.
“Which one is real?” he asked.
Ivy squinted, played them again. “I... honestly don’t know.”
“That’s the problem,” Derek said. “Fake content can now bypass our instincts. It’s no longer about ‘can I tell this is fake?’ but ‘do I have the tools to verify it?’”
They discussed tools like reverse image search, metadata tracing, and AI-detection platforms. Ivy realized it wasn’t enough to be a creator—she needed to be an investigator, too.
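The simplest of those verification tools is checking whether a file is byte-for-byte identical to a published original. A minimal sketch of that idea, using Python's standard `hashlib` (the file paths and digest here are hypothetical placeholders, not tools Derek named):

```python
import hashlib


def file_sha256(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def matches_known_original(path: str, known_digest: str) -> bool:
    """True only if the file exactly matches the digest a source published."""
    return file_sha256(path) == known_digest
```

A matching digest proves the copy is untouched; a mismatch only proves it was altered somehow, which is exactly why investigators pair hashing with metadata tracing and reverse image search rather than relying on any one signal.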
Next came a segment on Data Privacy. Derek explained how even harmless-seeming apps could collect voice data, location history, preferences—all to train future models.
“So that voice filter I used to sound like a robot...”
“...was probably saved to help future filters learn,” Derek said with a nod.
It wasn’t paranoia. It was digital hygiene. Ivy scribbled down notes on encryption, consent, and local vs. cloud storage. For the first time, she felt like she had the power to protect herself—and maybe others.
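Two of the habits in Ivy's notes can be sketched in a few lines: pseudonymize an identifier before storing anything, and keep the data on the user's own machine instead of a cloud service. This is an illustrative sketch, not a full privacy scheme (the salt, path, and field names are made up for the example, and a salted hash is weaker than real anonymization):

```python
import hashlib
import json
from pathlib import Path


def pseudonymize(user_id: str, salt: str) -> str:
    """Replace a raw identifier with a salted hash before it is ever stored."""
    return hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()


def save_preferences_locally(path: Path, user_id: str, prefs: dict, salt: str) -> None:
    """Write preferences to local disk only; the raw ID never leaves this function."""
    record = {"user": pseudonymize(user_id, salt), "prefs": prefs}
    path.write_text(json.dumps(record))
```

The design choice is the point: because the app only ever holds a salted hash, even a leaked preferences file does not directly reveal who it belongs to.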
The session closed on a softer note: a reflection circle. Derek dimmed the lights and asked, “So... what did you actually learn?”
Ivy paused. So many images flashed through her mind—her first AI-generated post, her bot persona, the weirdly accurate AI painting of her cat.
“I learned that AI isn’t just some cold machine. It reflects us. And if we want it to be kind, curious, fair... then we have to be.”
Derek nodded, his eyes a little glassy. “And what do you want to create next?”
Ivy smiled. “Something helpful. Maybe an AI assistant that helps college students stay organized without spying on them.”
“Well,” Derek said, reaching into his drawer and pulling out a gift-wrapped notebook. “Then it looks like your next chapter’s just beginning.”