
Your AI model choice is your voice

ChatGPT vs. Claude vs. Gemini: a Stanford researcher shares why conscious LLM choices matter

In a hurry? Skip the text below and just listen in by hitting play above. For even easier listening, this podcast is also available on Spotify and Apple Podcasts.

When ChatGPT was released by OpenAI in late 2022, the world woke up. Public discourse around artificial intelligence — previously confined to the inner workings of the tech companies building it — began in earnest, and our society was catapulted into an exciting and unknowable human-AI future.

In the months that followed, ChatGPT became the most well-known large language model (LLM) on the market. Competing LLMs soon arrived, but ChatGPT’s first-mover advantage cemented it as the go-to generative AI tool for many.

But as the AI landscape has evolved, so have our choices. We can now choose from a variety of models for a variety of tasks. And yet, despite the multitude of models at our disposal, it’s easy to reach for whichever tool is already open in the browser without considering our objective, and whether that model is the one that best meets it.

That’s where new research from Stanford researcher Vasyl Rakivnenko comes in. Vasyl’s research uses a three-step process to guide AI users in choosing the model that best fits their objectives, helping us make more informed, more intentional choices about our use of AI. You can view a snapshot of this research here.

Because not every little task should be plopped into ChatGPT. There are times when an open-source model (such as Llama-3) might be the better option. There are times when a less complex task can be easily handled by a smaller, more lightweight model that consumes less energy in the process. And there are times when it makes more financial sense to use one model over another: when a free model gets you the exact same result as a paid one, for instance.
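To make that idea concrete, here is a minimal, purely illustrative sketch of what such a routing decision could look like in code. Nothing here comes from Vasyl’s research: the model names, thresholds, and toy complexity score are all assumptions for illustration only.

```python
# A toy heuristic for routing a task to a model by rough complexity and cost
# sensitivity. All names, thresholds, and scoring rules are illustrative
# assumptions, not part of the research discussed in this episode.

def estimate_complexity(task: str) -> float:
    """Crude stand-in complexity score in [0, 1]; a real system would do better."""
    hard_words = {"analyze", "prove", "design", "multi-step", "code"}
    hits = sum(word in task.lower() for word in hard_words)
    return min(1.0, 0.2 + 0.2 * hits)

def pick_model(task: str, budget_sensitive: bool = True) -> str:
    complexity = estimate_complexity(task)
    if complexity < 0.3:
        # Simple tasks: a small, lightweight model is cheaper and uses less energy.
        return "small-lightweight-model"
    if budget_sensitive and complexity < 0.7:
        # Mid-range tasks: a free or open-source model may give the same result
        # as a paid one.
        return "open-source-model"
    # Reserve a large, paid frontier model for tasks that genuinely demand it.
    return "large-frontier-model"

print(pick_model("summarize this short email"))                     # small-lightweight-model
print(pick_model("analyze, design, and code a multi-step system"))  # large-frontier-model
```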

In this episode, Vasyl breaks down the key differences between competing LLMs, including their varying strengths, environmental impacts, costs, and ethical considerations. In doing so, he highlights how our individual choices lead to very real collective outcomes — and, ultimately, influence the development of more responsible AI technology.

Listen in to learn:

👉 Why your choice of which LLM to use (and when) matters. Different LLMs have varying strengths, environmental impacts, and safety considerations — all factors worth weighing before you type in your prompt and hit ‘enter.’

👉 Why bigger AI models aren't always better — and how smaller models can often handle simple tasks while using less energy and computing power.

👉 How current AI bias issues, if not addressed now, will likely carry forward into future AI agents and applications.

👉 How you can wield your power as a consumer of AI products for good. The models that are used most will influence which AI companies succeed — and shape the industry's future. It’s up to us to be aware of the differences between LLMs, and to make conscious product choices that best align with our values.

Because as consumers of AI products, our choices make a meaningful difference. The products we choose are more likely to succeed, and the companies that produce them are more likely to be funded and to grow.

In this way, your choice is your voice.

And as members of civil society holding tech companies to account, we need to ensure our individual voices are heard.

Until next time,

Cecilia

Vasyl Rakivnenko is the CEO of IngestAI Labs and a Responsible AI researcher who partners with the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and top Stanford faculty to produce breakthrough research on how we can build, deploy, and use AI more responsibly. He's also a Stanford MBA grad, angel investor, and Forbes-featured tech entrepreneur who has been building AI solutions since 2020.




Building community around ethical, responsible AI

Know someone who’s interested in learning more about AI? Forward this email to share.



RemAIning Human is written and edited by Cecilia Callas.
