Goldman Sachs analysts see generative AI impacting, if not eliminating, some 300 million jobs – Copyright AFP OLIVIER MORIN
Call it what you will, automated journalism has been around for a while, with much less hype. The original software was actually OK. Tacked on to first-generation AI, it creates a new and somewhat verbose world of information.
News Corp Australia runs 3,000 articles a week in “hyperlocal” (niche regional) news media. These articles are overseen by journalists, now called “data journalists”.
This is where writers are supposed to fearlessly agree with themselves and say it’s not the same thing as A-grade journalism, etc. ad nauseam. NO. I’m not going to knock the overall quality of the AI content. It’s reasonable. It’s not flashy or very expressive on its own, but it does the job.
Most things in the news aren’t A-grade journalism. They don’t need to be brilliant, just factual and properly spelled. This stuff isn’t exactly portfolio material for journalists, either. It could be written by a toaster for all anyone cares.
This is where the other alleged argument kicks in about removing drudgery from journalism. It’ll never happen. Consider the subject. News about Homo sapiens tends to suffer qualitatively by association with that fun-filled cotton bud, Homo sapiens.
OK, so what IS the problem?
There are multiple quality controls on the information. Editorial positions are whatever they are, as usual. It’s an automated version of the same old informational meat grinder, right?
No. Letting the AI equivalent of the Babes in the Wood out into cyberspace has already exposed a few genuinely serious risks. Never mind the conspiracy theory racket and banal hysteria. AI is totally dependent on whatever mishmash of data is available.
The problem is where AI sources its information. It has the capability to process vast amounts of information of whatever quality. About 5% of all data entered is wrong in some form, remember? Between the disinformation industry and inexcusably inefficient Couldn’t Care Less R Us search engines, AI-driven or not, sits a large, unworkable, and totally untrustworthy credibility gap.
Your news has another quality control: you. Your knowledge base has to read, process, or disregard this potential slop. AI isn’t doing a very good job of that itself. Take a Bing search for examples of AI journalism. It repeated the very same headline six times, from very different sources, including MSN and Sky, with identical sub-heads. Not impressive.
Well, so what, you ask? That’s a lot of utterly useless, repetitive, very off-putting search results, is what. You can see the inefficiencies built into a very simple search with three totally unambiguous search terms.
The engine extrapolated and contextualized the query, which would be OK, except that the result wasn’t what I was looking for at all. I didn’t need Encyclopedia Britannica. I just wanted examples of AI journalism. I did NOT want Prophecies from the Great Bot. The context became wrong automatically.
This is also an absolute baseline function for searching anything, let alone a large language database. Never mind the nitpicking about search filters, etc. That IS how people generally search. Simple terminology, on topic. Is that incomprehensible? Apparently, it is.
One look at that lot, and I couldn’t be bothered looking any further. The results already looked very wrong, even if they were packed with wholesome, enriching informational goodies. They weren’t. The repetitive ones were also very brief, with only a few links.
You can see how “search irrelevance creep” might become a problem as this mess evolves. The absolute rock-bottom line here is that AI can’t and shouldn’t do some things. It’s a technological toddler out of its depth at the moment.
The working state of any reliable tech is the result of fixing the bugs. If you want to use AI for journalism, be aware of these issues at the baseline.
… Which leads to this additional gem of wisdom – If you want insights, you need people.