Reflections on AI, Alcina, and the Artistic Process
- drakedantzler

- Feb 5
- 7 min read
Who I Am, and Why I Did This
I am a professor and director of opera at a state university, a position I have held for more than fifteen years. I create art on a limited budget, from a deeply student-centered point of view, and I frequently adapt or modernize works in order to promote engagement and learning. I operate in a high–resource-demand art form within a chronically scarce-resource environment.
I approached Alcina in a spirit of curiosity about what AI might bring to our production process: efficiency, scale, and new avenues for exploration. All of the AI-assisted work discussed here was done using the paid version of ChatGPT.
I assigned letter grades to my evaluations of AI's abilities. I am a teacher, after all. But first I grouped those evaluations into general categories: Organization and Logistics, Plot and Concept Development, Script and Text Creation, Graphics and Visual Design, and Technical Advisor.
Note on Evaluation and Scope
Throughout this essay, I use letter grades (A–F) to evaluate specific uses of AI within this project. These grades are not judgments of AI’s inherent value, nor predictions about its future development. They are situational, practical assessments: how useful each application was to me, in this institutional, artistic, and pedagogical context, at this moment in time.
Different artists, institutions, or projects would almost certainly arrive at different evaluations. That variability is part of the point.
AI in Organization and Logistics
As a Callback Cut Selector - Grade: B+
ChatGPT was able to select cuts that broadly matched my requests and could make complicated, nuanced suggestions that reflected different tempi and historic performance practices. It also made a few completely random suggestions and consistently struggled with the multiple editions of Alcina that exist. This confusion around source material became a recurring issue throughout the process.
As a Semester Rehearsal Organizer - Grade: A
This was one of its strongest use cases. ChatGPT could estimate staging time for scenes, differentiate between first-pass staging and later review rehearsals, and feed that information into a semester schedule that followed our institutional rules. It produced a complete semester staging plan efficiently and accurately.
As a Call Sheet Maker - Grade: D
I expected this to work well. It did not.
Calling rehearsals involves managing a large number of small, shifting variables: someone added to a scene late, someone present but silent, the need to shorten or extend a rehearsal period based on complexity, or availability changes. ChatGPT required all of this subtle, dynamic information to be defined with extreme precision.
While I am confident it could have done this with enough data entry, the time required to enter and maintain that information far exceeded the time it takes me to hold it mentally and generate calls myself. I tried this for one week, then returned to making calls by hand for the remainder of the process.
AI in Plot and Concept Development
As a Concept Generator - Grade: B+
ChatGPT could generate concepts at an astonishing speed. Many of them did not work. In this respect, it mirrors the human creative process. The difference was that ChatGPT generated ideas endlessly and without hierarchy or opinion.
This became one of my core discoveries: ChatGPT can produce the brilliant and the inane side by side and hold them in exactly the same regard. As a result, my role became less that of a collaborator and more that of a curator.
As a Scene Concept Generator - Grade: B+
This function behaved almost identically. I asked ChatGPT to summarize each scene and then offer three updated versions aligned with the new concept. I treated this like a choose-your-own-adventure exercise.
Often, one idea was compelling and two were unusable. Sometimes all three were mediocre. But again, ChatGPT presented them all with equal confidence and no evaluative signal.
As a Plot Overseer - Grade: D-
This was its single greatest failure in my experience.
ChatGPT lacks the ability to intuit emotional arc in a gestalt way. While it could generate compelling individual moments, it could not hold how Act I affected Act II, or how the most important emotional elements of the show needed to define the production as a whole.
As a result, the production struggled to “land the plane.” This is hardly unique in theater, and Handel’s operas are famously difficult in this regard. Still, even with ongoing human guidance, ChatGPT could not meaningfully engage with the deeper dramatic logic of the concept.
Educationally, the project was a success. Students had a vehicle for musical, theatrical, and intellectual growth. As a complete piece of theater, however, the show did not fully cohere.
AI in Script and Text Creation
As a Raw Translator - Grade: A
Here ChatGPT functioned like a super-charged Google Translate. Once I located a reliable open-source libretto and learned how to prompt effectively, it was remarkably efficient at gathering, translating, and presenting English versions of archaic Italian. Its main weakness was, again, confusion over editions.
As a Thesaurus - Grade: A
ChatGPT is an exceptional thesaurus. I could request multiple alternate phrasings, constrain tone, modernity, idiom, or syllable count, and receive results almost instantly. This was consistently useful.
As a Modern Language Consultant - Grade: B-
Here we encountered its lack of taste.
Quite obviously, ChatGPT lacks human "taste." By that I mean both refinement and correctness, as well as the human ability to prioritize, discard, and care.
Over time, ChatGPT and I developed a shorthand phrase: “Influencer shimmer.” This referred to its tendency to sprinkle vaguely modern, influencer-adjacent language into text. Occasionally it landed perfectly. Just as often, it produced language that was trite or actively embarrassing.
Despite repeated attempts to define, refine, and stabilize this concept, ChatGPT would “drift,” creating a strange hybrid of antique and modern language that felt stylistically incoherent.
As a Script Reviser - Grade: B (with wide variance)
Revisions followed the same pattern: rapid generation, excellent rule-following, inconsistent taste. Some results were excellent; others unusable. This averages to a B, but in practice ranged from A to D-.
As a Text Setter - Grade: F
ChatGPT repeatedly claimed it could set text musically, with attention to vocal considerations and theatrical appropriateness. It could not.
AI in Graphics and Visual Design
As a Graphics Creator - Grade: B
When I knew exactly what I wanted, ChatGPT could get me 80–90% of the way there quickly.

For example, I wanted a distinct visual language for handheld iPhone footage versus green-screen footage. The phrase “Candid at the Curio” came to mind, visually inspired by the neon sign in the Catwoman scene from Batman. My first prompt produced terrible results. After refining my prompt language, ChatGPT generated something close enough to refine further.
This workflow—artistic idea → specific prompt → refinement → fast 90% result—repeated successfully several times.

When given vague or general prompts, however, ChatGPT produced low-quality graphics with no meaningful relationship to the production, even when given visual references or project history.
Student collaborators, by contrast, brought creativity and taste I do not have. They also brought human limitations: time, communication, and cost. From a production standpoint, the human creativity was worth it. From a resource-scarce standpoint, the speed and compliance of AI is already reshaping production practice across the country. I was transparent about its use in our production. Others are not.
AI in Technical and Computational Systems
Overall Technical Support - Grade: A+
This was ChatGPT’s strongest domain.
With a modest personal tech background and no external technical support, I built a system involving real-time video capture, green-screen manipulation, animated overlays, local Wi-Fi networking, and websocket-based inter-computer control. We had never done anything like this at Oakland.
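To make the inter-computer control concrete, here is a minimal sketch of how cues might be encoded as JSON messages for transport over a local websocket link between machines. The message fields, cue names, and machine names here are illustrative assumptions, not the production's actual protocol; the actual websocket transport layer is omitted.

```python
import json

# Hypothetical cue-message format for websocket-based inter-computer
# control: each cue is a small JSON object naming an action and a target
# machine. All field and cue names below are illustrative.

def encode_cue(action, target, params=None):
    """Serialize a cue as a JSON string ready to send over a websocket."""
    return json.dumps({"action": action, "target": target, "params": params or {}})

def decode_cue(message):
    """Parse an incoming cue message back into a dictionary."""
    cue = json.loads(message)
    # Minimal validation: every cue must name an action and a target.
    if "action" not in cue or "target" not in cue:
        raise ValueError("malformed cue: %r" % message)
    return cue

# Example: cue a (hypothetical) green-screen machine to switch overlays.
msg = encode_cue("show_overlay", "greenscreen-pc", {"overlay": "emoji_rain"})
print(decode_cue(msg)["action"])  # show_overlay
```

A plain-JSON format like this keeps every machine's behavior inspectable during rehearsal, which matters more than efficiency at this scale.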
ChatGPT never missed. The only technical error in performance was human: mine, actually.
Software Integration and Workflow Design - Grade: A+
ChatGPT suggested OBS, proposed the QLab-to-OBS bridge, walked me through websocket setup, wiring, troubleshooting, and even helped create a standalone Snap Camera build. Though I ultimately cut Snap Camera for practical reasons, the technical guidance was flawless.
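For readers curious what a QLab-to-OBS bridge actually sends, here is a sketch of a scene-switch request in the obs-websocket 5.x protocol, where opcode 6 denotes a Request and "SetCurrentProgramScene" is the request type for changing the live scene. The connection and authentication handshake (Hello/Identify) is omitted, and this is my reconstruction of the message shape, not code from the production.

```python
import json
import uuid

# Build an obs-websocket (protocol v5) request that switches OBS to a
# named scene -- the kind of message a QLab-to-OBS bridge would emit
# when a cue fires. Sending it over the socket is not shown here.

def make_scene_request(scene_name, request_id=None):
    payload = {
        "op": 6,  # opcode 6 = Request in obs-websocket v5
        "d": {
            "requestType": "SetCurrentProgramScene",
            "requestId": request_id or str(uuid.uuid4()),
            "requestData": {"sceneName": scene_name},
        },
    }
    return json.dumps(payload)

print(json.loads(make_scene_request("Candid at the Curio"))["d"]["requestType"])
# SetCurrentProgramScene
```

Each request carries its own `requestId`, so the bridge can match OBS's response to the cue that triggered it.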
Coding and Custom Effects - Grade: A+
ChatGPT wrote the HTML for floating emoji overlays used throughout the production, which I then adapted into numerous variants.
It also helped me write a Python script to create escalating static-interference overlays during a “stream failure” moment, following precise timing and density rules. The result was both effective and theatrically satisfying. A human might have done this more efficiently from a coding point of view. I don't know; I'm not a coder. But from a get-it-up-and-working-with-limited-resources point of view, ChatGPT was exceptional.
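The escalating-static idea can be sketched in a few lines: map elapsed time during the "stream failure" moment to a noise density, then fill that fraction of a frame with static. The linear ramp and the specific numbers below are illustrative assumptions, not the production's actual timing and density rules.

```python
import random

# Sketch of escalating static interference: density ramps linearly from
# a light flicker to near-total noise over the failure moment. The ramp
# shape and constants are illustrative, not the production's values.

def noise_density(elapsed_s, ramp_s=10.0, start=0.05, end=0.95):
    """Linearly escalate from `start` to `end` density over `ramp_s` seconds."""
    t = min(max(elapsed_s / ramp_s, 0.0), 1.0)
    return start + t * (end - start)

def static_frame(width, height, density, rng):
    """Return a width x height grid: True = static pixel, False = clear."""
    return [[rng.random() < density for _ in range(width)]
            for _ in range(height)]

rng = random.Random(0)
for t in (0.0, 5.0, 10.0):
    print(t, round(noise_density(t), 2))
```

Clamping `t` to [0, 1] means the overlay holds at full static once the ramp completes, rather than wrapping or overflowing.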
General Reflections
When I knew exactly what I wanted, ChatGPT dramatically reduced friction between idea and execution. In a collegiate opera environment focused on student learning outcomes, many of those “90% solutions” are real wins.
The cost is artistic depth. Human time and collaboration bring taste, perspective, and meaning. Circumventing that process inevitably limits the art.
I also believe we are currently on the exponential portion of an AI development curve with a relatively low asymptote. AI will get cheaper and faster, but it will remain dependent on human meaning-making. It cannot feel, and therefore cannot create feeling in the same way.
On Negative Feedback
Some survey responses included hateful personal attacks. I want to address that directly.
First: I am an artist working in service of my students. I have every right to explore the world around me, translate it into art, and share that work. Shaming or bullying artists for work you dislike does nothing to foster creativity.
Second: Much of the criticism treated AI use as morally absolute. This is naïve. AI already shapes artistic life through algorithms, feeds, and predictive tools. Tools mediate art. Where one draws the line is personal and contextual. I don't expect everyone to have my exact point-of-view, but I do hope that artists will realize the issue is quite nuanced.
Finally: The criticism often assumed AI involvement is binary. This project explicitly demonstrated that it is not. Engagement exists on a spectrum. Students emerged with a more nuanced understanding of that complexity, which alone made the project worthwhile.
Would I Do This Again?
In some ways, yes. Certain uses of AI clearly benefited student learning by saving time and expanding capability.
This specific project, however, no. It demanded enormous personal energy, and while many responses were supportive, the hateful feedback was genuinely painful—especially given the transparency and care with which the project was conducted.
Conclusion
This project was a success for me and my students. We now have a more thoughtful, complex relationship with AI and artistic practice. I do not know where that relationship leads, but avoiding engagement with major social forces is not a viable artistic strategy.
If you have questions, I’m happy to discuss the project further.