“Every time a free tool goes freemium or a platform closes because it can’t make money, we’ve got an issue. What’s going to happen to the stories that tool feeds?”
A better understanding of the value of owning your digital assets — and having an exit strategy from a tool that could disappear — will only get more crucial over the next year.
Our ability to prove our work, at a time when the accuracy and veracity of our output is being challenged, is something every newsroom leader and journalist should be thinking about. And gaps in our interactives and story elements aren’t going to help us with that at all.
As we were writing this prediction, an email arrived from Google telling us that Google Fusion Tables was being closed down — or “turned down,” as the company nicely put it. The tool had reached the grand old age of 9, and Google says it has developed more suitable tools in that time.
Google has made it clear how to get your data back, and will be adding Fusion Tables data to its Takeout tool early next year so that users can export all of their data at once before the final shuttering next December.
That’s useful, but it will still lead to the demise of a large number of interactives which have been embedded in news stories — its own form of link rot. And it’s also going to hit the training of the next generation of journalists, because its simplicity made Fusion Tables a good introduction to data for student journalists. Many educators, including us, have used it in classes, and Fusion Tables visualizations have ended up in young journalists’ portfolios.
This isn’t a new issue: Every time a free tool goes freemium or a platform closes because it can’t make money, we’ve got an issue. What’s going to happen to the stories that tool feeds? It also affects the digital memory of the news communities we serve. We saw it with the closure of Storify — a great way to thread social media content, yes, but a great big hole in a news page when it died.
This embed death is something we’ve been thinking about for a while. We ran into it during some social media research back in 2014, when ScraperWiki’s API access was suspended after we’d already started our work. That experience made us think seriously about exit strategies, saving content, and both dead links and dead code.
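One way to get ahead of embed death is to audit archived story pages for third-party embeds pointing at services known to have closed. The sketch below is our illustration, not anything from a real newsroom workflow: the `DEAD_HOSTS` list and the sample story HTML are hypothetical, and a real audit would maintain a much longer list and crawl a full archive.

```python
# A minimal sketch of auditing story HTML for embeds from dead services.
# DEAD_HOSTS and the sample page are hypothetical examples.
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical list of hosts for services that have shut down.
DEAD_HOSTS = {"storify.com", "fusiontables.google.com"}

class EmbedFinder(HTMLParser):
    """Collect the src URLs of <iframe> and <script> embeds in a page."""
    def __init__(self):
        super().__init__()
        self.embeds = []

    def handle_starttag(self, tag, attrs):
        if tag in ("iframe", "script"):
            src = dict(attrs).get("src")
            if src:
                self.embeds.append(src)

def at_risk(urls):
    """Return the embed URLs whose host is on the dead-service list."""
    return [u for u in urls if urlparse(u).hostname in DEAD_HOSTS]

# Hypothetical archived story with one Fusion Tables embed.
story = '<p>Story text.</p><iframe src="https://fusiontables.google.com/embedviz?q=1"></iframe>'
finder = EmbedFinder()
finder.feed(story)
print(at_risk(finder.embeds))  # flags the Fusion Tables embed
```

Running something like this across an archive would at least produce a worklist of stories with holes, before readers (or students building portfolios) find them first.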
Here at Cardiff University, we’ve been teaching journalists to code as part of the syllabus since 2013 when we launched our MSc in Computational and Data Journalism. We get our students to use version control software to archive and maintain their projects. It’s about time we did something similar with the contents of our stories, particularly those based on third-party tools.
There are two key issues at play. The first is the danger of the magpie approach to journalism innovation. Julia Posetti of Oxford’s Reuters Institute for the Study of Journalism has looked at journalism’s fascination with “bright, shiny things” and its implications for sustainable innovation. And it’s true — much of the conversation at industry conferences is about how we can use a certain tool to deliver a new experience, often without worrying much about that tool’s potential lifespan.
The second is how reliant we can be on free tools, and how we need to plan for proper archiving so that content in our long tail of clicks stays useful. Again, this isn’t a new issue, but Meredith Broussard offers some great techniques for tackling it in the new beta version of the Data Journalism Handbook.
And what about information that isn’t stored digitally? Leaks in institutional memory aren’t a new thing; whenever a seasoned veteran leaves a newsroom, there’s a loss of information that can’t easily be recaptured. Is there an opportunity here for turning one of our latest shiny things — machine learning — loose on our archives to create a proper digital newsroom asset?
Here’s hoping we see more of this thinking put into action over the next year.