LONDON—Conversational artificial intelligence (AI) tools may soon “covertly influence” users’ decision making in a new commercial frontier called the “intention economy”, University of Cambridge researchers warned in a paper published Monday.
The research argues the potentially “lucrative yet troubling” marketplace emerging for “digital signals of intent” could, in the near future, influence everything from buying movie tickets to voting for political candidates.
Growing familiarity with chatbots, digital tutors and other so-called “anthropomorphic” AI agents is helping to enable this new array of “persuasive technologies”, it added.
That shift will see AI combine knowledge of users’ online habits with a growing ability to read them and anticipate their desires, building “new levels of trust and understanding”, the paper’s two co-authors noted.
Left unchecked, that could allow for “social manipulation on an industrial scale”, the pair, from Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI), argued in the paper published in the Harvard Data Science Review.
It characterizes how this emergent sector — dubbed the “intention economy” — will profile users’ attention and communicative styles and connect them to patterns of behavior and choices they make.
“AI tools are already being developed to elicit, infer, collect, record, understand, forecast, and ultimately manipulate and commodify human plans and purposes,” co-author Yaqub Chaudhary said.
These new AI tools will rely on so-called large language models (LLMs) to target a user’s cadence, politics, vocabulary, age, gender, online history, and even preferences for flattery and ingratiation, according to the research.
That data would be linked with other emerging AI technology that seeks to achieve a given aim, such as selling a cinema ticket, or to steer conversations towards particular platforms, advertisers, businesses and even political organizations. AFP