
Innovating with AI: the impressive work of Aftonbladet

When it comes to innovating with AI in the news industry, Schibsted, a media group based in Oslo, shows the world what AI can do (see this story). Among their innovations is creating an LLM for media in Norway. In an episode of the AI Inside podcast, Jeff Jarvis states: “Schibsted is, and I do not exaggerate, the most admired news company and cause of the most jealousy in the news industry in the world because they've been successful and they figured out the internet better than any other company I know of online.”

I caught up with Aftonbladet's Martin Schori in June, a few days after the EU elections. Martin is deputy managing editor of Aftonbladet, one of Schibsted's legacy media outlets in Sweden. I was particularly curious about their chatbot, Valkompisen, launched after an internal hackathon to give users fact-checked answers about the elections. Leveraging AI for the elections proved to be a good bet, driving significant audience engagement.

Hi Martin! Thanks for taking some time to talk with me. As you know, many publishers have embarked on the AI journey over the past two years, bringing AI into their newsrooms. I was curious: what's the goal in your own newsroom?

Martin: In October, we started our AI hub. It's now seven people working full time: four journalists, two developers, and one designer.

The idea was to make fast progress. We have a large product and tech organization, so many ideas from the newsroom had to be prioritized against other projects. It was often hard to get them through because there were so many big projects. So we decided to build something on top of the organization. These seven people started with education, training the rest of the organization, and developing and implementing tools for internal use. Now we're more focused on services and products for the audience.

> Also on AI: Bringing AI to a 400-year-old media group

When we think about AI, there's so much that can be done. We can work with internal tools, workflows, or user-facing tools. So, how do you prioritize?

There's a lot you could do on the business and marketing side, but we haven't gotten that far yet. We started with the newsroom. Initially, it was about experimenting, and it was easy to experiment with editorial tools like headline generators and proofreading. Now we're prioritizing services for the audience. Our AI-generated article summaries have been a huge success.

That's exciting! I'm going to age myself here – but when I think about the time I spent transcribing interviews as a reporter, I can only imagine what a huge time-saver it is. However, I can see how sensitive it is to use transcription tools with confidential material. Was protecting your sources and data the incentive for developing this tool internally?

Yes, exactly. That's why we built our own transcription tool. You can upload interviews, but you can also use it live. If you have a press conference, for example, it can transcribe live for you. Obviously, we want to be able to upload sensitive data safely.
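Aftonbladet hasn't disclosed the details of its transcription stack, but the core idea of keeping sensitive recordings in-house can be illustrated with a speech-to-text model that runs entirely on the newsroom's own hardware. The sketch below uses the open-source Whisper model purely as an assumption, not as a description of their tool:

```python
# Minimal sketch of an in-house transcription step (illustrative only --
# Aftonbladet's actual stack isn't public). Running an open-source model
# locally keeps sensitive recordings off third-party services.
import whisper  # pip install openai-whisper

def transcribe_interview(audio_path: str, language: str = "sv") -> str:
    """Transcribe a recorded interview file on local hardware."""
    model = whisper.load_model("medium")  # downloaded once, then cached locally
    result = model.transcribe(audio_path, language=language)
    return result["text"]

if __name__ == "__main__":
    print(transcribe_interview("press_conference.mp3"))
```

Live transcription of a press conference, as Martin describes, would add a streaming or chunking layer on top of the same model; the file-based path above only shows the core idea.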

I'm very curious about the success of AI summaries. It's one of the use cases many newsrooms are embracing now. Can you tell me more: does it drive more audience, or cannibalize it?

Indeed, it doesn't cannibalize – quite the contrary. But even if that had been the case, we would have offered it anyway, because if that's what the audience wants, they should have the option. I think the idea that a journalist decides 100% how the audience will consume the news is old school. Now that technology allows it, we can give the audience the power to decide how they want to consume it – read, listen, watch, read bullet points, or read half and listen to the rest. That's a great development for us.

The idea here was to do something with generative AI pretty quickly. Our product team decided on generated summaries, and it turned out to be a very good service. We had no idea how people would use it. We thought those who expanded the summaries would only read them and then go somewhere else. But we realized that almost half of everyone who saw the summary went on to read the article. Those who did tended to read the articles more deeply than those who didn't expand the summary. That surprised us. At first, we thought something was wrong with the data, but maybe it's not that strange. For example, when you read a research paper, you get an abstract first, which gives you a good understanding of what you're going to read. Maybe we should replace our leads with summaries.
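The interview doesn't describe how the summaries are produced, but the general pattern is simple: send the article text to an LLM with a constrained prompt and render the result as an expandable bullet list. A hypothetical sketch, assuming the OpenAI API and model names that are not necessarily what Aftonbladet uses:

```python
# Hypothetical sketch of generating an expandable article summary with an LLM.
# Model choice and prompt wording are assumptions, not Aftonbladet's recipe.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_article(article_text: str, max_points: int = 4) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize news articles as short, factual bullet points. "
                        "Use only facts stated in the article."},
            {"role": "user",
             "content": f"Summarize in at most {max_points} bullet points:\n\n{article_text}"},
        ],
        temperature=0.2,  # keep the summary close to the source text
    )
    return response.choices[0].message.content
```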

I'm curious to know more about the elections chatbot, as you've said it's a huge success. How did you build this and how did the audience interact with it?

That was one of our biggest bets for the year. We had to convince our management team to focus on it for the elections, which are typically not a very engaging news event for the audience. We thought it might be good because we could test it for the elections and then scale it up for other events. It actually turned out to be very successful – we had 160,000 questions. We categorized and checked them all. We could see that some questions were malicious, but most were genuine.

We didn't just build a sandbox that goes out on the internet and finds information. On the contrary, the bot can only use data that we put in: official data from the European Union and all the political parties. We also did a lot of interviews and surveys with each political party. So it's actually journalism. It's a very good model. Everything is safe and secure, and it's also instructed not to hallucinate. If it doesn't have an answer, it has to say so. Up until now, we've gone through thousands of questions and haven't seen any examples of hallucinations.
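What Martin describes is a closed-domain setup: the bot answers only from material the newsroom curated (EU data, party surveys, interviews) and is told to admit when it doesn't know. That is essentially retrieval-augmented generation over a fixed corpus. A minimal sketch of that pattern, assuming the OpenAI API for embeddings and chat – Valkompisen's actual implementation isn't public:

```python
# Minimal retrieval-augmented generation sketch over a fixed, editor-curated
# corpus. Illustrative assumptions throughout: OpenAI models, in-memory search.
import numpy as np
from openai import OpenAI

client = OpenAI()

# In a real system this corpus would be the vetted election material,
# chunked and stored in a vector database. "Party X" is a placeholder.
CORPUS = [
    "The European Parliament has 720 members elected for five-year terms.",
    "Party X wants to lower the EU voting age to 16 (from the party survey).",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

CORPUS_VECS = embed(CORPUS)

def answer(question: str, top_k: int = 2) -> str:
    q_vec = embed([question])[0]
    scores = CORPUS_VECS @ q_vec  # embeddings are unit length, so dot product ~ cosine
    context = "\n".join(CORPUS[i] for i in np.argsort(scores)[::-1][:top_k])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided context. "
                        "If the answer is not in the context, say you don't know."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content
```

The design point Martin highlights lives in two places: the corpus is restricted to vetted sources (no open web search), and the prompt explicitly tells the model to say it doesn't know rather than guess.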

I think you're touching on something very interesting. There are obviously trust issues when it comes to generative AI. Fact-checked content, whether it's numbers or even interviews, is really valuable for a news organization. And it can be served to the audience in many ways.

I think so too. It's a bit scary because in the newsroom we usually have control of every single number we publish. With the bot, we didn't know exactly what it was going to answer. I think we should be careful not to use AI just for the sake of using AI. There's already news fatigue and a lot of content out there. We shouldn't use generative AI just to create even more content.

Thanks Martin!

This piece was written by Anabelle Nicoud.