Amidst the Gen AI Boom, Three Key Requirements Appear

It is the week of CES. The rumble you hear is the boom of artificial intelligence, especially generative AI.

A visit to the Las Vegas Convention Center (and the Venetian) will be like listening to the final strains of the 1812 Overture while seated next to a Napoleonic cannon.

Booooooooooooooooooom.

There is the thunder of new models, ever-larger models with almost unimaginable numbers of parameters[1]; the roar of $27 billion invested last year in generative AI start-ups (in an otherwise frosty VC year)[2]; and the reverberations of OpenAI's monthly revenue climbing to a reported $133 million, a $1.6 billion annual run rate.[3]

Boooooooooooooooooooom.

That being said – and as our heads ring (and germs are shared and feet go numb) from time at the LVCC – it's probably also time to begin separating the generative AI signal from the noise.

Separate the generative AI signal from the generative AI noise.

History teaches that, in every technology boom, a sustainable value proposition emerges from the needs, interests, and issues of prospective users. The users who, if those needs, interests, and issues are addressed, will sign purchase orders totaling not just $133 million per month but $1.6 billion and more per month.

Despite the noise, let us not forget that we are in phase 1 of the generative AI revolution.  We are just getting started.  That means user needs, interests, and issues are just beginning to emerge, cautiously, like the first shoots of spring daffodils peeking out above the soil.

And with those, the emergence of what might become a trinity of user requirements.

A trinity of trust.

Respect for intellectual property.

Attribution and citation.

Accuracy.

The first is respect for intellectual property.

It is the big one, with serious legal and economic implications. (Are LLMs thinly disguised plagiarization algorithms? If OpenAI is required to pay for content, will it devolve into a techno WeWork?[4])

Most of the commentary on this topic this year will no doubt center on the suit filed December 27 by The New York Times against OpenAI in the Federal District Court in Manhattan[5], alleging unauthorized use of published work to train AI.

(At first glance, I'm sympathetic to NYU professor Gary Marcus' take today on the suit and its echoes in the UK.)

But the use of copyrighted material is only one layer of this issue.

Second, and related to the first, are the issues of attribution and citation. Those of us who aspire to professional status – be it in law, medicine, academia, consulting, marketing, biz dev, you name it – build our work on a deep foundation of previous and independent third-party research and engagement.

Attribution and citation are the bedrock of what we do.

Attribution and citation are key reasons why thought leaders within a leading U.S. West Coast university are now exploring generative AI value propositions[6] with Perplexity AI, the “answer engine” that now claims 10 million monthly active users (and closed a $73.6 million funding round).[7]

And no surprise. According to generative AI analyst Bret Kinsella of Synthedia, Perplexity users “get instant, reliable answers to any question with complete sources and citations included.”[8]

The third requirement, critical for the actual implementation of a generative AI solution in a clinic, a lab, a law office, or a university library, is reliable and trustworthy accuracy.

This superb paper in Healthcare outlines the many value propositions for generative AI in healthcare, with potential impact at every step of what is a natural language-centric patient experience, from assessment to the care plan and outcome evaluation.

Yet the authors hesitate, and appropriately so.[9] They note that the technology is in its early days, and that healthcare (like other professions) is knowledge-intensive: it requires a) specific knowledge in a particular domain, b) real-world knowledge, and c) expertise gained over time. (It is why physicians go to medical school and complete post-graduate residencies. And why, from the operating table, you want to look up at an experienced veteran.)

Given that, healthcare LLM output must consistently reflect clinical and/or scientific consensus.

It must be accurate, in the everyday, sitting-down-with-my-doctor sense of the word.

 

With some 35 years in the enterprise world (sales, business development, consulting), I’d argue that the requirements for business are no different.

Respect for intellectual property? Damn right. The threat of a lawsuit is less imminent than the loss of professional credibility: the perception that you're a thief or a fraud, passing off someone else's work as your own.

Attribution and citation? They go with the first.

Accuracy, built on exact data, time-tested sources, and reliability of response? Why would you want the alternative?

 

I’m Jon Stine, 35+ years in business and technology. 

I read, I write, I advise.

Jcstine1995@gmail.com, +1 503 449 4628.


[1] “The Best Large Language Models in 2023: Top LLMs,” November 23, 2023, https://www.uctoday.com/unified-communications/the-best-large-language-models-in-2023-top-llms/

[2] https://siliconangle.com/2023/12/27/pitchbook-tech-giants-invested-generative-ai-startups-vcs-year/

[3] The Information, as cited in Bret Kinsella, https://synthedia.substack.com/p/openais-revenue-climbs-to-133M-per

[4] “The desperate race to save Generative AI,” garymarcus.substack.com, January 8, 2024

[5] https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html

[6] Private correspondence with the author, December 2023

[7] https://synthedia.substack.com/p/perplexity-hits-10-million-maus-and

[8] Ibid.

[9] Yu, Xu, Hu, and Deng, “Leveraging Generative AI and Large Language Models: A Comprehensive Roadmap for Healthcare Integration,” Healthcare 2023, 11, 2776 (https://doi.org/10.3390/healthcare11202776)
