CTRL+ALT+Recit, Part V: The Ongoing Rewrite
- drakedantzler

- Oct 29
This post is the final entry in a series about adapting the libretto of Alcina for a modern audience and performers with the help of ChatGPT.
We are now in the implementation phase of the Alcina libretto. Like any new text, the act of singing a line—or hearing it land in the room—often dictates a rewrite. A line comes up in rehearsal, a student raises an eyebrow, and I end up back here, in conversation with ChatGPT.
✏️ Example: Refining the Line
Original exchange
“I know exactly what kind of vibe made her lock in on me.”
“That vibe’s long expired.”
The problem was simple but revealing: the repetition of vibe dulled the punchline. The rhythm worked, but the echo made the text sound unpolished—too self-aware, too written.
Step 1: Identify the Issue
We realized we weren’t fixing a meaning problem; we were fixing a tone problem. The line needed to sound natural, like real conversation—not like a parody of one.
Step 2: Keep the Function, Change the Texture
We kept the structure but swapped vibe for a term that felt more grounded:
“Your game’s long expired.”
That version preserved the rhythm but landed with more confidence and less irony—closer to how a modern student might actually deliver it.
Step 3: Adjust the Register
The next step was to make long expired sound more casual. ChatGPT generated a list of options, and we read them aloud in rhythm with the score:
Your game’s worn out.
Your game’s played out.
Your game’s gone cold.
Your game’s run dry.
Your game’s old news.
Step 4: Test and Choose
I ran these through in my mind, choosing the one that fit the feeling and solved the tone issue. This is the main way my personal creativity shows up here: as an editor rather than a writer. And that is the key difference in how creativity works when writing a new libretto with ChatGPT.
Takeaway
This is the micro-process we repeat hundreds of times:
keep the dramatic purpose intact,
modernize the sound,
test it with real voices,
choose what lands in the room.
The goal isn’t just to modernize Handel—it’s to let the characters speak the way our students might actually speak, without losing the emotional charge underneath.
The Living Draft
Every decision we make gets tested in rehearsal and often rewritten after student feedback. We listen for what lands, what confuses, what feels right to sing. The cast has become a second layer of editors—and they’ve learned to question the text openly. If a phrase feels stiff, we fix it together. We have an open conversation, and generally I let them choose what feels right to them.
Frankly, I’m an old person and out of touch with the language anyway.
That shared flexibility is what makes this collaboration with ChatGPT so different. Because the AI isn’t a living collaborator—it doesn’t have an ego, a schedule, or a stake in a specific phrasing—the students and I feel completely comfortable pushing, rejecting, revising.
It’s a creative dialogue without fear of offense. The pressure to get it right gives way to curiosity:
What if we tried it this way? What if the line leaned younger, bolder, truer?
A New Kind of Collaboration
Working with ChatGPT this way has blurred the boundary between direction, translation, and authorship. It’s still Handel’s music, still the emotional architecture of 1735—but the words now carry the diction of 2025, shaped by collective feedback and digital improvisation.
If earlier entries chronicled how we adapted Handel, this one marks the moment we accepted that the work will never truly be finished. Each rehearsal, each conversation, each tweak is part of the performance.
In that sense, Alcina has become not just a production, but a living document of collaboration—between composer, director, students, and yes, an algorithm.
And maybe that’s fitting. After all, our Alcina’s world has always been one built on illusion—and what is a chatbot, if not another kind of spell?



