

Over the past few months, the team at Hearst Newspapers has been experimenting with AI tools to create “Connecticut Today”, CT Insider’s weekday audio news briefing.
At first, the team thought they could summarize a few stories, add a quick introduction, and finish with a closing sentence or two. Those initial scripts were fine, but not really something you’d want to listen to daily to get caught up on the news.
Determined to improve the product, team members from across Audience, Editorial, AI and Engineering got to work perfecting an audio script prompt that uses a daily selection of top stories to deliver an output intended for listening. The script is then fact-checked and edited by humans, read by an AI-generated voice powered by Everlit, and reviewed by the CT Insider newsroom to ensure pronunciations land correctly. The entire workflow takes a little over an hour each day.

We spoke to Ellie Miller, Director of Audience and Engagement, and Derrick Ho, who leads AI and Editorial Strategy, about the strategy and processes behind this project.
Madeleine: Thank you both for your time, and congratulations on this project! Let’s start from the beginning: what was the motivation behind launching this project?
Ellie Miller: Thank you! We were looking for a way to produce a new kind of audio product without a huge investment in time and resources. We wanted to see if we could create a daily audio briefing that was low-lift in terms of workflow and production while gaining the habit-forming benefits. The particular market we started with hadn’t had a regular daily audio briefing before, so it was the perfect place to test.
Derrick Ho: When Ellie and others first came to me with the idea, it was clear that a large language model (LLM) would be perfect for this. It could summarize our articles and convert them into a format that’s easy to listen to, which is a different skill set from traditional writing (so it’s not as easy for print or text journalists to do). Our initial thought was to use ChatGPT, but we quickly realized we needed something more integrated.
M: How did you go from that initial idea to a working product? Can you walk us through the operational steps?
Ellie: We needed a way to automate the process as much as possible. Our goal was to produce a daily briefing in about an hour to an hour and a half, including fact-checking and editing.
Derrick: We started by experimenting with a prompt for the LLM to draft the script before refining this prompt to structure the audio briefing effectively. We also drew inspiration from local TV newscasts, where they often tease the weather or sports segments at the beginning to keep viewers engaged.
For the production side, we were already using a tool called Everlit, which we used to create audio versions of our articles for accessibility. We worked with Everlit to see if we could adapt their tool to handle a multi-story audio briefing, adding features like pauses and sound effects. This has been very effective and allowed us to benefit from an existing tool while adding our own workflow around it.
Ellie: Once the script is drafted and fact-checked, we feed it into Everlit to generate the audio. This process has allowed us to significantly reduce the production time compared to traditional audio production.

M: What is the wider business strategy behind this feature? Is it focused on audience acquisition and engagement, or is it part of your subscription model?
Ellie: Right now, the immediate goal is to see if we can build a habit-forming product. We want to test whether an audio briefing can increase user engagement. We’ve also had some ideas about possibly integrating it into our subscription model, perhaps by offering a full briefing to subscribers or using it as a teaser for paywalled stories to drive traffic.
Derrick: One of our big goals is to expand this project to other newsrooms and personalize it for different regions. Connecticut, for example, has dozens of small towns, each with its own unique news needs — can we scale this technology to create tailored audio briefings for each of those towns? We’re also exploring how to make the briefings dynamically updated, so a listener can get a personalized summary no matter when they tune in.
M: This project is a great example of a collaboration between editorial, product, and AI teams. What does success look like for this project?
Ellie: Success for us is proving that this is a viable and scalable product that can be expanded to other newsrooms. We’re also very interested in using the audio clips as social media content to drive further engagement.
Derrick: On a personal level, success is about showing how we can use AI in our editorial products. We’re still working on reducing the amount of human intervention required for the briefings. We have an internal tool, which we call “Producer P,” that drafts the script, but this is then checked by our internal teams. One idea, inspired by another project, was to use an LLM to evaluate the output of another LLM, comparing the summarized text against the original article to flag any potential errors. This would let us maintain quality while still increasing efficiency.
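The “LLM checks LLM” idea Derrick describes could be sketched as a second model reviewing the drafted briefing against the source article. The sketch below is illustrative only: `call_llm` is a hypothetical stand-in for whatever model API a newsroom actually uses, and the prompt wording is an assumption, not Hearst’s actual implementation.

```python
# Illustrative sketch of a verification pass: a reviewer model compares the
# drafted briefing summary against the original article and flags claims the
# article does not support. All names here are hypothetical.

def build_verification_prompt(source_article: str, briefing_summary: str) -> str:
    """Assemble a prompt asking a reviewer model to flag unsupported claims."""
    return (
        "You are a fact-checking assistant. Compare the SUMMARY against the "
        "ARTICLE. List any claims in the summary that the article does not "
        "support, or reply 'OK' if there are none.\n\n"
        f"ARTICLE:\n{source_article}\n\n"
        f"SUMMARY:\n{briefing_summary}\n"
    )

def call_llm(prompt: str) -> str:
    # Placeholder: a real workflow would call the newsroom's model API here.
    return "OK"

def flag_errors(source_article: str, briefing_summary: str) -> str:
    """Run the reviewer model over one article/summary pair."""
    return call_llm(build_verification_prompt(source_article, briefing_summary))
```

A human editor would still review anything the model flags; the point is to narrow the fact-checking surface, not to replace it.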