
All aboard the hype cycle of AI


AccountingWEB’s editor at large, John Stokdyk, abandoned his stance of detached neutrality when AI bots started sending out promotional press releases.

27th Apr 2023

When your stock in trade is to be a “seen it all before” cynic, it’s difficult to adjust to circumstances that you haven’t seen before, like global pandemics, or generative artificial intelligence (AI) that can create convincing simulations of human content.

The trigger point for my discomfort is the latest iteration of OpenAI’s language engine, ChatGPT-4. As we read recently, the bot achieved a pass mark in ICAEW accountancy exams and has successfully completed a range of law and medical licensing tests.

I can play the world-weary old-timer on this score. Chatbots featured heavily in AccountingWEB coverage as far back as 2016, when Unit4 launched its Wanda app. Sage responded with its bot Pegg, followed swiftly by Xero and QuickBooks. No business app was complete, it seemed, without a companion digital assistant. Yet where are they now?

The flash-in-the-pan experience is typical of the technology hype cycle devised by industry analyst Gartner, which currently shows generative AI to be nestling just below the famous peak of inflated expectations.

Breaking the cycle

But could the latest generation of AI confound the experts and disrupt established technology adoption patterns? The point at which my wry amusement curdled into existential dread was when I received a press notice from an organisation called Newsmatics proclaiming that it had created an AI-powered press release generator.

It’s not so much the potential loss of professional standing and employment opportunities that bothers me; it’s the prospect of being swamped by mountains of bot-generated guff alongside the promotional deluge we already get from human PRs.

As many commentators have pointed out, ChatGPT and its ilk are not the founts of profound knowledge that Douglas Adams imagined with the Deep Thought supercomputer in Hitchhiker’s Guide to the Galaxy. 

Instead, they digest huge quantities of digitised language and use machine learning pattern recognition to match recurring phrases that have been deployed in answer to similar queries before. ChatGPT-4 doesn’t “understand” the content it’s producing; it compiles a convincing sequence of words to meet the specified inputs. To this end, the superbot will occasionally invent its own bogus quotes and citations to make the content look more authoritative. Have a look at Bright Group’s ChatGPT-4 2023 Budget predictions post for an instructive example.
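For readers curious about the mechanics, the “predict the next word from patterns seen before” idea can be illustrated with a deliberately crude toy: a bigram model that counts which word follows which in a small corpus, then generates text by always picking the most frequent successor. This is a sketch of the statistical principle only; real systems such as GPT-4 use large neural networks over tokens, and the corpus and function names below are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count word successors in a tiny corpus,
# then generate text greedily. Illustrative only - real large
# language models are vastly more sophisticated.
corpus = (
    "the company reported strong profits and the company reported "
    "strong growth and the auditors reported strong concerns"
).split()

# successors[word] counts every word observed immediately after it
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def generate(start, length=6):
    """Greedily emit the statistically most common next word, step by step."""
    words = [start]
    for _ in range(length):
        counts = successors.get(words[-1])
        if not counts:
            break  # dead end: this word was never followed by anything
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

The output is fluent-looking but meaning-free: the model stitches together phrases it has seen without any notion of whether the result is true, which is exactly the weakness behind the bot’s invented quotes and citations.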

ChatGPT-4 may pass accountancy and law exams, but it can’t understand the client’s situation, interpret their desires and formulate a technically correct, ethically sound path for them… yet.

Self-perpetuating flannel

Returning to the press release generator, Microsoft has put $10bn into OpenAI, and Google’s parent company Alphabet is also staking out its claim on this territory. Both companies’ search engines use machine learning to rank online search results. It isn’t hard to imagine these systems responding more positively to content produced by closely related language models, crowding out more meaningful human insights. Thanks to generative AI, we now face the prospect of being deluged by a torrent of drivel on an incomprehensible, industrialised and self-perpetuating scale.

Sorry if that sounds apocalyptic, but the paranoia is based on more than 20 years’ experience of Google’s prejudiced search optimisation algorithms on AccountingWEB.

I’m not the only one to feel this creeping unease. In a recent rumination, early mover and “State of AI” report author Ian Hogarth voiced his fears around the potential capabilities of what he calls “God-like AI” (or AGI – artificial general intelligence – as it is known in the trade).

Hogarth has been backing AI tech companies since 2014. Along with 1,800 other signatories including Elon Musk, Apple co-founder Steve Wozniak and scientist Gary Marcus, he put his name to a public letter calling for a six-month moratorium on AI development to assess the underlying risks and ethical concerns.

Shoggoth with a smiley face

Their main fear has been aired many times before: that the speed of technology development in this area is racing ahead of social, environmental and regulatory responses. “Consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight,” Hogarth wrote.

Hogarth’s article is illustrated with a “Shoggoth” image, in which the public-facing toy of ChatGPT-4 is represented as a smiley face being manipulated from behind by a giant, slobbering monster. The monster represents the giant technology companies that have absorbed the most cutting-edge AI developments into their growing empires.

There might be a hint of personal interest in my stance, but the information era has seen a marked increase in economic inequality in favour of global tech giants. The companies that prospered from this shift do not have the interests of wider society at heart. As well as unleashing all sorts of unanticipated consequences, my recurring fear is that ChatGPT and its AI descendants could become the vehicles for another, even more damaging wave of monopoly control and exploitation.

Editor's note: The paragraph on ownership of OpenAI has been corrected in response to the error pointed out by Paulwakefield1 below.

Replies (7)


By paulwakefield1
28th Apr 2023 06:59

"OpenAI’s parent company Meta also owns Google"

I haven't looked into the ownership but I can't believe Microsoft would be so tied in with OpenAI if Meta were the owners. And I don't think Meta own Google!

Thanks (0)
By JustAnotherUser
28th Apr 2023 08:15

was this article written by a human?

According to open ai (I asked it) As of September 2021, OpenAI is owned by its investors, which include Microsoft, Reid Hoffman's charitable foundation, Khosla Ventures, and LinkedIn co-founder Allen Blue, among others.

I did ask it "who is John Stokdyk" ...first pass it said...

"I'm sorry, but I could not find any well-known person or public figure with the name "John Stokdyk" through my search. It is possible that this individual is not a public figure or has not gained significant recognition in their field"

Adjusted to "who is John Stokdyk from accountingweb" and it gets a bit better... not so smart this chatgpt thingy as searching just your name in google gives a better answer.

"John Stokdyk is a journalist and editor who has worked for AccountingWEB, a website that provides news and information to accounting and finance professionals. Stokdyk has been with AccountingWEB since 1999 and has served as editor since 2006. He is known for his coverage of developments in the accounting and finance industry, as well as his insights on topics such as technology, practice management, and professional development for accountants. Stokdyk has also contributed to a number of publications outside of AccountingWEB, including the Financial Times and the International Accounting Bulletin."

Thanks (1)
John Stokdyk, AccountingWEB head of insight
By John Stokdyk
28th Apr 2023 09:45

@paulwakefield1 - You are entirely correct and the mistake is down to poor fact-checking on my part. I'm really sorry for letting you down and to colleagues who would expect better of me.

I spent a fair amount of time pursuing various strands of thought down Google wormholes and at one point confused OpenAI with something I imagined was called "OpenMind"... Convinced that this was a Google subsidiary, that's what I typed when it would have taken 30secs to clarify the issue, as you have done.

That said, Microsoft has put $10bn into OpenAI, so I think I can stand by my contention that generative AI and further developments will end up in the hands of megatech corporations.

As an aside, I asked ChatGPT for its analysis of this point and got a classic, bland summary of the two sides of the argument: "On the one hand, technology corporations have the resources and expertise to drive innovation and bring new AI-based products and services to market. They can also use their power to invest in research and development and to promote collaboration within the scientific community, which can benefit society as a whole.

"On the other hand, the concentration of power in the hands of a few corporations can stifle competition and innovation, limit consumer choice, and lead to ethical concerns around privacy, bias, and fairness... Ultimately, the future of AI and its role in society will depend on how it is developed, deployed, and regulated."

@JustAnotherUser - thanks for saving me the trouble of vanity surfing my own name on ChatGPT. All that info is out there on the search engines, but as I suggested the bot is not above a bit of fabrication. To my knowledge, I have never contributed to the International Accounting Bulletin.

Thanks (3)
By Justin Bryant
28th Apr 2023 15:49

It's not hype. Look how far technology has advanced in the last 100 years and then extrapolate 10, 20 or 30 years from today. As Arthur C Clarke once said (paraphrasing), if we met an alien intelligence it's likely their technology would be indistinguishable from magic.

The very existence of the human brain in the first place (that no-one understands properly or at all re consciousness etc.) shows there's no hard-edged physical limit here.

Thanks (0)
Replying to Justin Bryant:
By Justin Bryant
02nd May 2023 09:42

This AI expert bloke seems to agree with me.

Thanks (0)
By Joggingaway
28th Apr 2023 16:55

Meta doesn’t own Google. Alphabet owns Google. Meta owns Facebook. I think GPT would have done a better job getting this right.

Thanks (1)
By Hugo Fair
28th Apr 2023 20:18

One of the inherent weaknesses of humans (one might almost think of it as a design flaw) is the default desire to believe (most of) what we're told.
There are good evolutionary reasons for this ... it consumes less energy than arguing and it helps to cement group ties ... but it has left us susceptible to demagoguery, for which there are many examples of the danger (Hitler, Stalin and Putin to select a few of the more obvious)

Amongst the problems with which leading AI scientists are currently wrestling is what might be termed the 'plausibility factor', where GPT is beholden to that measure well before any concept of truth is considered.

It's probably worth mentioning that GPT stands for generative pretrained transformer ... which in plain English means it's 'trained' on (as in it reads) almost limitless volumes of text in order to make the statistically most likely 'predictive' guess as to the most appropriate word to generate next in its output (aka transformed text).
At no point does it have an understanding of the topic, let alone any implications, so its main measure is plausibility of the end result - making it akin to an untutored 3-year old with demagogic powers ... frightening, eh?

And then, as you mention John, lurking in the background are the great white sharks of the IT sector ... ready & willing to manipulate the diet (of reading matter) fed to their pets. There's already a mountain of evidence of how this has unintentionally reinforced racial stereotypes and led to the opposite of diversity in facts & opinions - but there's no way, as it stands, that we'd know if any of that was being done deliberately (political interference anyone?)!

Thanks (3)