Something ChatGPT Does Really Well (Mostly): Scheduling
- drakedantzler

- Sep 8
- 5 min read
Updated: Sep 10
Over the past 15 years of directing opera, I’ve found that a subset of my students needs an exceptionally clear, transparent rehearsal plan to feel confident as performers. While that level of clarity isn’t always part of the professional world, it’s vital in the collegiate one. In the spirit of this project, I handed the reins to AI and we dug in. I asked ChatGPT to help me move from our new, clean libretto to a working rehearsal plan: a scene breakdown, a count of musical numbers, timing estimates, and a mapped calendar.
How It Started
A clear constraint. I gave ChatGPT our Alcina libretto and a sample breakdown to match. I asked it to embed arias and choruses inside scenes (no “mini-scenes”) and to use only the libretto—no outside info. This last point is very important. One of the most troublesome—and most impressive—things about ChatGPT is the way it draws on sources and knowledge outside the boundaries of a project. In this case, if I hadn’t told it to constrain itself to our specific libretto, it would have mingled other versions and created a jumbled, useless breakdown.
Me: “Use the Alcina libretto I uploaded to build a scene breakdown like my sample. Embed musical numbers inside their scenes. Don’t use any outside information.”
ChatGPT: “Got it. I’ll mirror your format, scene by scene, using only the libretto. Arias/choruses will be embedded within the scene entries.”
It returned a cohesive breakdown: scene, location, characters, notes—exactly what I asked for.
When Excel Got Weird
The first try at turning that breakdown into an Excel file demolished my template’s formatting. This is another prompting lesson: you have to be very clear about inserting material into existing templates, and ChatGPT is not great at formatting. (Just wait for the entries on building a new script. Oof!)
Me: “Please put this into my Excel template and preserve the formatting.”
ChatGPT (first try): “Here’s a new sheet with your data.”
Me: “No—edit my existing sheet in place. Keep column widths, fonts, merged cells. Replace only the dates and clear columns B & D.”
After I spelled out “surgical edit,” it worked directly in the original workbook and preserved the layout. Lesson: don’t say “recreate”—say “edit in place.”
Counting the Music (Only What’s Not Recit or Chorus)
I asked for the total non-recit, non-chorus numbers (arias, ariosi, ensembles) based solely on the libretto’s markings.
Me: “Count all arias/ariosi/ensembles. Exclude recit and chorus. List them, then give me the total.”
ChatGPT: “24 numbers total,” followed by a clean scene-by-scene list.
Fast and useful—and the count confirmed my choice of where to place the intermission.
Creating a Dynamic Plan: Timing by Text Length

I wanted staging time estimates that scale with the length/density of the text, not a one-size guess per scene. ChatGPT parsed sections labeled in the libretto—[Recit], [Aria], [Arioso], [Coro], [Trio]—counted non-empty lines, and applied category heuristics (higher baselines for arias/ensembles).
Me: “Estimate staging minutes from the libretto text alone. Use different baselines for Recit/Aria/Arioso/Trio/Chorus.”
ChatGPT: “Done. First-pass total is 1,105 minutes (≈ 18h25m).”
Some recit rehearsal estimates felt very short, so I set a floor:
Me: “Keep all arias as is, but set a minimum 15 minutes for every recit.”
ChatGPT: “Recalculated: 1,375 minutes (≈ 22h55m).”
Those numbers were in the ballpark of what I would have estimated—and certainly good enough for a first pass.
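To make the logic concrete, here is a minimal sketch of the kind of heuristic ChatGPT applied: count non-empty lines per labeled section, scale by a per-category baseline, and apply the 15-minute recit floor. The baseline values below are illustrative guesses, not the numbers ChatGPT actually used.

```python
# Hypothetical sketch of the timing heuristic. Baselines are made up for
# illustration; the real model's minutes-per-line values weren't disclosed.
BASELINE_MIN_PER_LINE = {
    "Recit": 0.8, "Aria": 2.0, "Arioso": 1.5, "Trio": 2.5, "Coro": 1.8,
}
RECIT_FLOOR = 15  # minimum minutes for any recit section

def estimate_minutes(sections):
    """sections: list of (category, text) tuples parsed from the libretto."""
    total = 0.0
    for category, text in sections:
        # Count only non-empty lines as a proxy for text density.
        lines = [ln for ln in text.splitlines() if ln.strip()]
        minutes = len(lines) * BASELINE_MIN_PER_LINE[category]
        if category == "Recit":
            minutes = max(minutes, RECIT_FLOOR)
        total += minutes
    return round(total)

# A two-line recit gets floored to 15 minutes; a five-line aria scales to 10.
print(estimate_minutes([("Recit", "line\nline"), ("Aria", "a\nb\nc\nd\ne")]))  # prints 25
```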
Dates: The Moment I Rolled My Eyes
I asked it to search the web for our Tuesday/Thursday class dates; it guessed wrong from bad sources. When I linked it to the exact webpage with the dates, it worked flawlessly.
Me: “List all Tue/Thu class days for Fall at Oakland University.”
ChatGPT (first pass): “Here they are…” (they weren’t)
Me: “Use this calendar.”
ChatGPT (second pass): “Updated—now matches the sheet.”
A note I’ve learned over and over: ChatGPT sometimes makes wild mistakes. It was off by more than a month on the first try!
Mapping the Plan to the Room
This semester I have 20 rehearsals × 1:45 = 35 hours. We mapped the 1,375 minutes of staging into those dates in score order, keeping each musical number completely inside a single class unless its rehearsal estimate was larger than a single class period, and honoring some constraints:
Start staging Sept 16
Skip Oct 14 & Oct 16 (memorized midterm exam)
Reserve Dec 2 & Dec 4 for work-throughs
As I scanned the output, I noticed that two short recit scenes had accidentally consumed a full block. I flagged it, and ChatGPT adjusted. At that point we had proof of concept, so it was time to add complexity.
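The mapping step above can be sketched as a simple greedy packer. The dates, class length, and estimates below are illustrative stand-ins; the real plan used the full Tue/Thu calendar and the 1,375-minute totals from earlier.

```python
# Hypothetical sketch of mapping numbers (in score order) into class days.
# Each number stays inside one class unless it exceeds a class period.
from datetime import date

CLASS_MINUTES = 105  # 1:45 per rehearsal
SKIP = {date(2025, 10, 14), date(2025, 10, 16)}      # midterm exam days
RESERVED = {date(2025, 12, 2), date(2025, 12, 4)}    # work-throughs

def pack_schedule(class_dates, numbers):
    """numbers: list of (name, minutes) in score order."""
    schedule = {}
    days = [d for d in class_dates if d not in SKIP and d not in RESERVED]
    i, remaining = 0, CLASS_MINUTES
    for name, minutes in numbers:
        # Start a fresh class if this number won't fit and isn't oversized.
        if minutes > remaining and minutes <= CLASS_MINUTES:
            i, remaining = i + 1, CLASS_MINUTES
        while minutes > 0:
            used = min(minutes, remaining)
            schedule.setdefault(days[i], []).append(name)
            minutes -= used
            remaining -= used
            if remaining == 0 and minutes > 0:
                i, remaining = i + 1, CLASS_MINUTES
        if remaining == 0:
            i, remaining = i + 1, CLASS_MINUTES
    return schedule
```

An oversized number (longer than 105 minutes) is the only thing allowed to spill across two classes, which matches the rule above.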
Dynamic Reviews That Scale With Reality
I wanted reviews at specific checkpoints (1.6, 1.11, 1.16, 2.6, 2.10, 2.13, 2.15, 2.18, 2.22), but not a generic length. We made them dynamic:
Me: “Insert reviews after those checkpoints. Each review = 30% of the staging time since the previous review. Cap at 105 minutes.”
ChatGPT: “Added. Reviews total ~359 minutes (~6h). Staging + reviews = 1,734 minutes (≈ 28h54m).”
Plenty of headroom inside our 35 hours.
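The dynamic-review rule is simple enough to sketch: accumulate staging minutes, and at each checkpoint emit a review worth 30% of what's accumulated, capped at one class period. The scene labels and minute values in the example are illustrative.

```python
# Hypothetical sketch of the dynamic-review rule described above.
REVIEW_RATIO = 0.30
REVIEW_CAP = 105  # one full class period

def review_lengths(staged_minutes, checkpoints):
    """staged_minutes: list of (scene, minutes) in score order.
    checkpoints: set of scene labels after which a review is inserted."""
    reviews = []
    since_last = 0
    for scene, minutes in staged_minutes:
        since_last += minutes
        if scene in checkpoints:
            reviews.append((scene, min(round(since_last * REVIEW_RATIO), REVIEW_CAP)))
            since_last = 0  # the next review only covers new material
    return reviews

# e.g. 400 staged minutes before a checkpoint caps at 105 rather than 120.
```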
“Next Class, Please”

ChatGPT has strengths, but it does boneheaded things, too. Not being human, it initially put reviews on the same day as the scenes were staged. That’s not how it works, obviously, so it got another prompt:
Me: “Place each review in the following class, first on the docket. If there’s time left, continue staging.”
ChatGPT: “Done. Reviews now begin the next day, with staging packed after as time allows.”
It rewired the scheduler so every review lands at the start of the next rehearsal, and any remaining time that day goes to new staging.
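The "next class, first on the docket" fix amounts to one placement rule, sketched here with an illustrative data shape (a list of per-day dockets and a map of which day each checkpoint closed on).

```python
# Hypothetical sketch: each review is prepended to the *following* day's
# docket, so staged material is never reviewed the same day it was staged.
def place_reviews(day_plans, reviews):
    """day_plans: ordered list of per-class staging dockets (lists of items).
    reviews: dict mapping day index where a checkpoint finished -> label."""
    plans = [list(items) for items in day_plans]
    for day_index, label in reviews.items():
        nxt = day_index + 1
        if nxt < len(plans):
            plans[nxt].insert(0, f"Review: {label}")  # review goes first
    return plans
```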
The “Missing Scenes” Scare
At a final check, I thought a few scenes had vanished (hello, 1.5). They hadn’t. The parser lumped recit → recit transitions into a single block, so the minutes were counted but the block’s label didn’t list them.
Me: “Why is 1.5 missing?”
ChatGPT: “It’s included in the previous recit block; I can re-split at scene headers if you want.”
Me: “Leave it. Timing is right; I just needed to know why.”
Transparency beats prettiness in a drafting phase.
Summary of the Process: Concrete Prompt → Response Highlights
Edit Excel in place (don’t nuke the format):
Me: “Open my sheet, keep formatting, replace dates, clear columns B & D only.”
ChatGPT: “Edited cell-by-cell in your file. Formatting preserved.”
Set recit minimums:
Me: “Set a 15-minute minimum for recits; don’t change aria estimates.”
ChatGPT: “Total now 1,375 minutes (22h55m).”
Dynamic reviews:
Me: “Review = 30% of material since last checkpoint; cap at 105.”
ChatGPT: “Added ~359 minutes of reviews; combined total 1,734 minutes (28h54m).”
Reviews in the next class:
Me: “Reviews must happen next class, and go first.”
ChatGPT: “Rescheduled so reviews open the following rehearsal; staging fills remaining time.”
Calendar constraints:
Me: “Start 9/16; skip 10/14 & 10/16; reserve 12/2 & 12/4 for work-throughs.”
ChatGPT: “All constraints applied in the map.”
Reality checks:
Me: “Why did two short recits get a full block?”
ChatGPT: “Even-spread mistake; grouped them with the finale.”

What I Liked, What I Didn’t, What I Learned
Mostly clean and usable. The breakdown, counts, and timing model got me from zero to a working plan fast.
Fixable. The Excel formatting and the date fiasco were annoying, but solvable—once I gave precise, practical prompts (“edit in place,” “reviews next class”). The process was still faster than doing it by hand.
The interesting bit. Dynamic reviews scaled to music density felt like a real upgrade: proportional, easy to use, and automated enough to keep me out of the weeds.
In the End, Still Worth It
I ended with a schedule mapped to real dates, reviews that scale, and protected work-throughs—plus extra buffer inside our 35 hours. The debugging—calendars, formatting, workflow prompts—is to be expected with a new AI partner. This one was a clear win, and I think it will be more efficient next time.
Onward.