Artificial Intelligence

No, ChatGPT is NOT making you stupid.

Sepupus, the internet has been abuzz of late because of a new MIT study called “Your Brain on ChatGPT”.

All around Reddit and the rest of the internet, people are forming wild conclusions, reading patterns in the stars, and deciding, unilaterally or with the agreement of some people out there, that somehow people are being made stupid, that MIT researchers have said it is so, and that it must therefore be true.

I find it interesting and fascinating.

Now, in what way is this related to economics if at all?

Well, artificial intelligence is a very important part of our economy and it will continue to be important for the foreseeable future, as it shapes and reshapes the economy and how we treat human capital in ways that are intuitive and sometimes unintuitive, in ways more subtle and interesting than the standard narrative of robots replacing human beings may suggest.

It’s interesting to think about it and how it’s going to affect the way that we can live and work in this world which is ever-changing and continually evolving. With that in mind, here’s my perspective.

I do not generally think that ChatGPT is making us stupid.

I read the MIT study earlier, and I broadly understand the way that it is constructed.

You can have a look at it here.

Link: https://arxiv.org/pdf/2506.08872

Basically, what they did was ask participants to write SAT-style essays, on topics chosen from a set of prompts, across three sessions in three different groups:

1. One purely using their brains

2. One using Google

3. One using ChatGPT

Then, they had some participants come back for a fourth session where they swapped people from one group to another — 18 people did this in total.

Now this is what ChatGPT says, in summarizing what happened:

(AI generated. As a full disclosure, I do use AI-generated content on this website once in a while, though I affirm that I will curate it to ensure that it is accurate, high quality, and matches my experience. I hope you won’t mind, as what matters more, I think, is the specific choice of what to show you rather than whether the content is AI generated!)

What the Study Did

The researchers wanted to understand how using ChatGPT-like tools (called LLMs, or large language models) affects your brain and your essay writing.

They divided participants into three groups:

  1. LLM group: people who used ChatGPT to help write their essays.
  2. Search Engine group: people who could use Google to help them.
  3. Brain-only group: people who weren’t allowed to use any tools; they just used their brains.

Each person wrote three essays under their assigned condition.

In Session 4, they mixed things up:

  • People who had used ChatGPT before were asked to now write essays without it (LLM-to-Brain).
  • People who had never used ChatGPT were now allowed to (Brain-to-LLM).

Only 18 participants completed this fourth session.

What They Measured

They used several ways to assess the participants’ thinking and writing:

  • EEG (electroencephalography): This measures electrical activity in the brain. They looked at brainwaves to see how engaged or active the brain was.
  • Essay analysis: They checked the essays using Natural Language Processing (NLP), human teachers, and an AI-based scoring system.
  • They also looked at how similar or different the essays were (in terms of topics, words used, named entities, etc.).
  • Self-reports: They asked participants how much they “owned” or felt connected to their writing.

What They Found

🧠 Brain Activity:

  • Brain-only group had the strongest and most widely connected brain activity. Their brains were working hard and across many areas.
  • Search Engine group had moderate brain engagement.
  • LLM users had the least brain activity and the weakest connectivity — indicating low mental effort.
  • When LLM users switched to Brain-only, their brain activity stayed low. It was as if their minds were still in “autopilot” mode — under-engaged.
  • When Brain-only users switched to LLM, they had high activity in memory and visual/spatial reasoning areas — kind of like how Google users behaved.

📄 Essay Quality and Similarity:

  • Essays from each group became more similar within their group, especially in wording and topics. LLM users’ essays were more homogeneous.
  • LLM users had the lowest sense of ownership of their essays and often couldn’t remember or quote what they had written.
  • Brain-only users had the highest sense of ownership and memory of their writing.

⏳ Long-Term Effects:

  • Over 4 months, the people who used LLMs consistently had weaker brain engagement, wrote more similar, less original essays, and felt less connected to their work.
  • This suggests that relying too much on ChatGPT may make people less mentally engaged and less able to learn deeply.

Bottom Line (I disagree with this)

  • LLMs like ChatGPT make writing easier, but they might also reduce mental effort and learning.
  • This has serious long-term implications for education, especially if students use LLMs without actively thinking.
  • The study doesn’t outright say ChatGPT makes you stupid, but it shows that heavy dependence on AI tools may hinder cognitive growth and originality over time.

Alright, no more AI.

I disagree with the interpretation, and I’ll tell you why.

It seems simple and intuitive to conclude that ChatGPT is making people stupid because of the lowered brain activation in the people who use LLMs over the period of several months.

However, to my mind, there are several problems with that interpretation, and it is good that the authors acknowledged the study’s limitations and the need for more extensive research: participants had no choice of LLM, they were all recruited from nearby universities and were not a diverse sample, and the task itself was a narrow one.

Firstly, as the researchers admitted, this task was specifically related to essay generation with a limited set of topics.

Secondly, the observed drop in brain connectivity cannot be cleanly attributed to a decline in cognitive performance; it could just as well reflect reduced engagement with the task.

For instance, people who used ChatGPT may not have been especially absorbed in their first, second, and third essays, and therefore, when they came to the final task, they may simply have come in with no strong feelings whatsoever.

This can be interpreted as a decline in cognitive performance, but should it be interpreted as such?

The researchers do not tell us, and it is probably something that they did not really look into in the context of this study.

Let’s also now consider another broader point about intelligence at large, now in a Sepupunomics context.

While working memory and brain connectivity might be taken as indicators of intelligence, it is unclear that they are the sole indicators of intelligence, or that a lack of connectivity or engagement indicates a lack of intelligence.

In fact, what we consider ‘intelligent’ now has changed drastically relative to what we used to understand as intelligent, and given the fluid nature of intelligence throughout the course of history, we have no reason to suppose that the future should be static or unchanging — or that connectivity or engagement in this context indicates the presence or absence of intelligence in a person.

Intelligence in every era has always been defined relative to outcomes that we consider to be valuable or worthwhile; as Naval Ravikant has observed, and I paraphrase, the intelligent man or woman is the one who gets what they want out of life.

In every generation, social and economic conditions have changed, and human beings and our brains have adapted and evolved in relation to those social, economic, and material conditions.

Accordingly, the jobs and tasks that we now consider valuable have also changed vastly compared to the past, and so has what we consider valuable human capital, or…

Human capital: the value employees bring to a company that translates to productivity or profitability, and more loosely, the value that human beings bring into an organization (whether a company, a nation-state, or the world) that translates into benefit to the world.

Automated toll gates have replaced the toll booth operators who used to sit there lazily, one after another, and the ATM has made it so that we speak to bank tellers only when there are special circumstances we cannot deal with ourselves; the job that we call a farmer now varies across countries and civilizations, and can still mean the smallholder carrying a hoe and wearing galoshes, or it can mean the grand-scale tractor fleet operator running cloud-seeding operations with artificial intelligence.

For many reasons that include these changes in technology, the jobs of our era have changed, the demands of employers have changed because different skills are now required, and what we call or consider valuable human capital has changed. This is not theoretical: it has been happening for years, it is happening now, and it will continue to happen in the years to come.

Coincidentally, I was speaking with some students earlier and telling them about how it has become normal and uncontroversial that people no longer remember phone numbers. But that’s not a bad thing, and neither does it indicate that people have become stupid because they can no longer remember phone numbers.

Rather, it points to the fact that what is called cognitive offloading is now a possibility: because technology permits it, we can use phones as external cognitive storage, freeing us from dedicating those cognitive resources to memory. In our modern social context, it would be the person who cannot operate a phone who would likely be considered “stupid”, not the person who cannot remember a phone number but can retrieve it from their device.

The same thing has happened with directions: they no longer preoccupy so many of us, because Google Maps has replaced the need to consult physical maps and work out how to get to places, even though that isn’t universally doable and won’t always work for every location.

My Assessment:

Given the limited nature of the task, the possible alternate interpretations of the data, and the fact that intelligence can certainly be defined in other ways, I cannot conclude that there is a causal link between using ChatGPT and a drop in intelligence or an increase in stupidity.

Instead, in critiquing the study, I would say that ChatGPT allows us to participate in the world in new and different ways, which some might argue is reflective of heightened intelligence, and that was not accounted for by the study.

Note, for example, that the study task very much does not represent the best use of ChatGPT: merely using ChatGPT to generate essays and then copying and pasting the contents to create a Frankenstein creation that the creator, so to speak, had no real knowledge of and could only appreciate on a surface level.

We are perhaps aware that we should not ask camels to climb trees, fish to fly in the skies, or birds to swim across the English Channel.

That would be absurd and it would be entirely illogical.

It is fun to visualize, though!

In the same way, I think it is silly to evaluate people’s brain waves while they use ChatGPT and then come to easy conclusions about whether they have become smarter or dumber, when using ChatGPT in that particular situation was analogous to the somewhat colourful examples I provided earlier.

With ChatGPT, people have the ability to explore a very wide range of topics very quickly.

They have the ability to confirm their assumptions, assess their own thinking, ask questions that people would never ask under normal circumstances, and then figure out whether they were correct or if they were wrong and obtain directions for future research.

This cycle of confirmation and disconfirmation, research, understanding, analysis, and synthesis is extremely quick, but none of those things, nor the ways they may relate to the ‘intelligence’ of the new era, is really tested by being asked to choose a single essay topic and write it, and none of this is accounted for under the conditions in which the researchers placed the participants.

As acknowledged by the researchers, the results that we saw were highly context-dependent — If intelligence is, as Mr. Ravikant said, getting what you want out of life, it seems almost a little silly to imagine that a study involving writing an essay would generalize to the entirety of life and the vast array of situations outside of ChatGPT that a person could ostensibly use it for.

We as laypeople may come up with a hundred misconceptions about what the results may or may not show, and it is entirely a person’s right to talk about what happened to their mother/father/sister/brother/irresponsible child/precocious baby using ChatGPT or anything else they like…

But the capital-T Truth remains out there, and it definitely should be a subject of investigation in the future.

Conclusion:

We cannot unambiguously conclude that ChatGPT inherently makes you stupid or makes you smart, certainly not from this study. The authors affirm this as well, and the truth, as it turns out, remains a matter of opinion.

Here is mine.

My guess is that the capital-T Truth, with respect to how our society defines or will redefine intelligence at a later stage, and in consideration of the sea change that AI is bringing to our world, is not that people who use ChatGPT are becoming more stupid; after all (and this is AI-generated), Malaysia’s MyDIGITAL blueprint and Singapore’s AI governance frameworks both acknowledge that productivity in the 21st century isn’t just about raw mental horsepower: it’s about tool fluency, adaptability, and strategic attention.

Like any other tool, ChatGPT can make you stupid or it can make you smart, depending on how you choose to use it. Calculators can certainly make you dumb if you repeatedly bash them against your head and end up rewiring your brain the wrong way, I suppose, although that’s not really something people use calculators for, even when we relied upon them. Perhaps we made up for what we lost in arithmetic skills with a greater and vaster exposure to problems that we would never have encountered otherwise.

This doesn’t exclude higher levels of talent from emerging within the system as outliers who stand beyond the calculator, the pen, the paper, and certainly ChatGPT. And it also doesn’t exclude the possibility that, because of AI and the way all of us are using it and living through it in this era, even the paradigm of what we consider worthwhile to teach and to learn, both in economics and in life, will change along the way.

On my part, I am pretty confident that I have become smarter than I otherwise would have been as a result of ChatGPT, relative to what I would have been in an alternate universe where it had never come into existence.

Of course, there is no way to construct a counterfactual or to disconfirm that. But I suppose, in the long run and on the balance of things, time will tell: if using this technology continues to help me get what I want out of life, and everything good along the way, in ways that continue to affirm my sense that this technology is game-changing, cognitively altering, and a complete break from the way we used to do things, then I suppose I will not have been entirely wrong in my assessment.

Thank you for reading, and see you in the next one!

V.

ELIZA – The Chatbot That Started It All.

I’ve been doing quite a bit of reading recently to prepare for BiZ Gear Up on the 24th, and part of that led me to read a bit more about the early history of chatbots – I hope you’ll enjoy this one!

We take a brief intermission from the hype that is OpenAI’s ChatGPT and make a small trip back in time.

Picture this: it’s the year 1966, and a computer scientist at the Massachusetts Institute of Technology‘s Artificial Intelligence Laboratory named Joseph Weizenbaum has just created something remarkable – the world’s very first chatbot: ELIZA.

Image credit: Public domain via Wikimedia Commons

Just consider this example of a conversation from Norbert Landsteiner’s 2005 implementation of ELIZA, and you can see what it was capable of.

ELIZA was designed to simulate conversation by responding to typed text with pre-programmed phrases and questions.

But what made ELIZA so special was that it was programmed to mimic the conversational style of a therapist, in particular a Rogerian therapist.

Users could “talk” to ELIZA about their problems and concerns, and the chatbot would respond with empathetic and non-judgmental phrases like “Tell me more about that” or “How does that make you feel?”
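To give a flavour of how this kind of keyword-and-template matching works, here is a minimal Python sketch in the spirit of ELIZA. To be clear, these rules are my own illustrative ones, not Weizenbaum’s actual script, which used a much richer set of keyword tables and transformation rules:

```python
import random
import re

# First-person words are "reflected" into second-person ones, as ELIZA did.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you're"}

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

# A few hand-written keyword rules; "{0}" is filled with the reflected text
# captured after the keyword.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), ["Why do you feel {0}?", "Tell me more about feeling {0}."]),
    (re.compile(r"\bi am (.+)", re.I), ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (re.compile(r"\bmy (.+)", re.I), ["Tell me more about your {0}."]),
]
DEFAULTS = ["Tell me more about that.", "How does that make you feel?"]

def eliza_reply(user_text: str) -> str:
    """Return a canned, reflective response based on simple keyword matching."""
    for pattern, templates in RULES:
        match = pattern.search(user_text)
        if match:
            return random.choice(templates).format(reflect(match.group(1).rstrip(".!?")))
    return random.choice(DEFAULTS)

print(eliza_reply("I feel anxious about my thesis"))
# e.g. "Why do you feel anxious about your thesis?"
```

No understanding, no memory, no model of the world: just pattern matching and a mirror held up to the user’s own words, which makes the reaction it provoked all the more striking.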

It wasn’t just a simple question-and-answer program – it was designed to provide a sense of emotional support and understanding that reflects interestingly on the ways that people derive comfort from self-affirmation.

Weizenbaum didn’t intend for the chatbot to be taken very seriously, calling it a “parody” in his 1976 book “Computer Power and Human Reason”… But the way that the chatbot was received was far from just a parody.

The response to ELIZA was overwhelming.

People were fascinated by this new technology that could seemingly understand and respond to their thoughts and emotions, and the program quickly gained popularity as people tried the chatbot.

But perhaps what’s most remarkable about ELIZA is that it wasn’t just a novelty. Weizenbaum’s creation laid the foundation for decades of research in the field of natural language processing and artificial intelligence.

ELIZA was the first chatbot, but it certainly wouldn’t be the last – and its legacy lives on in the many conversational AI programs we use today, in our Bings, Bards, ChatGPTs, Claudes, and the many more that exist now and will exist in the future.

Can’t wait to see what is to come 🙂

ChatGPT Explodes In Malaysia

ChatGPT’s been around for a while, but as I’ve noted, my home country of Malaysia has been a bit slow in dealing with the hype it’s generated…

But that may have changed.

A while ago, I joined ChatGPT Malaysia.

Today, it exploded and hit 1k, and it shows no sign of stopping; the hype is real!!

Why?

A mixture of Manglish, an anonymous dude who prompted OpenAI’s ChatGPT with the legendary Manglish prompt, the combined efforts of Kenneth Yu Kern San and Jornes Sim, and, I guess, a huge, huge dose of the wonders of ChatGPT x)

My own small contribution (lol!)

I guess I’ve evolved into the comments guy – take a look here if you want 🙂

Ah, what can I say?

It’s truly fascinating to be at the beginning stage of a revolution.

First off, congrats and creds to Pang for being infinitely better than I am at managing communities at the moment – something I’m definitely looking forward to learning more about in the days ahead!!

Second off, I care a lot about the deeper significance of things – and I’m incredibly glad that this is one of the many things that’s starting off AI on the right foot in Malaysia, my home country – where this will go and what will happen I have no idea, but really look forward to watching what the world’s going to bring 🙂

Amongst other things, I guess it’s brought a ChatGPT Plus subscription, for which you now have to join a waitlist in order to sign up.

Can’t wait to see what’s next, and thank you based Sam Altman and OpenAI for creating this gift to humanity 😛

Till the next one!!!

P.S. Is it too ambik peluang (opportunistic) if I tell you that I’m the creator of the “Transforming Your (Creative) Writing With ChatGPT” course on Udemy? 😛

Thanks and till next time!

Can you write a Master’s thesis with ChatGPT?

Since the very dawn of time, students have sought new and creative ways to pass their exams that, uh, do not include just studying.

People have hidden scraps of paper inside their pens, written down answers on their forearms, transcribed ancient Chinese texts onto underwear…


“pls stop” — every teacher throughout history

Even now, there is a disturbing number of articles on WikiHow about cheating, namely 10 Ways To Cheat on a Test Using Body Parts and even 3 Ways to Cheat on a Test Using Pens or Pencils… (WikiHow, why do you have so many of these????)


And the list goes on!

Now, it’s no secret that it’s very possible to cheat with ChatGPT and that this has thrown educators worldwide for a loop, but then I received a rather funny question earlier on the Artificial Intelligence Megathread that I started on Lowyat.net.

Well.

Could you potentially write a Master’s thesis with ChatGPT?

It so happened that someone on the ChatGPT Malaysia Facebook group had asked about the same thing, so I thought ok, let’s make it happen.

Anyway, I was curious about whether it was actually possible, so I decided to give it a go.

Here’s what I asked:

Okay, so at the very least the software proposed a bunch of topics that seemed kind of plausible and interesting.

Anyway, since I’m involved in the education industry and AI-based learning is very interesting to me, I decided to ask ChatGPT to follow up on #7, as follows:

Okay, wow! I had sources too! This was getting interesting! But then…


I looked at this, and I was captivated: Was I on my way to get a Master’s degree for this man?

No, wait. Wasn’t this even better? Wasn’t this thing essentially describing the process of creating a personalized new education technology company for me???

I set out in earnest, yearning to go where no man had ever gone before!

Okay, seemed great so far! I ran out of words, though, so I asked ChatGPT to continue:

Okay, uh


Do you see what I’m seeing here?

Rather than actually writing the thesis, ChatGPT was slacking off: it was casually not doing what it was told to do and presenting me with a nonsense summary!

That won’t do! You think just because you’re an AI assistant you get to be lazy?!

I asked it to continue, and provide results in detail.

For about five minutes, this seemed really, really plausible, so I was happy again…

…before my skepticism began again.


So I checked the references, only to realize that they mostly couldn’t be found anywhere.

Okay, I was thinking to myself.

This is a wonderful piece of software, I declared, trying to beat back the cognitive dissonance.

Surely the third page would be a little bit better? Or so I thought.

At this point, I realized – ChatGPT had failed.

The two methodology sections contradicted each other, and there was no way to reconcile them unless I prompted ChatGPT with the specific information it actually needed, which I decided not to do, because that effort would be better spent just writing the thesis myself if I actually had a clear idea of how to do so.

So, how do we answer our research question?

With a solid no.

  1. As you can see, there’s a word limit for responses, which means that you will have to re-prompt ChatGPT, which is likely going to lead it to drift from the original prompt.
  2. ChatGPT’s memory for prior responses is about 4,000 tokens (roughly 3,000 words), and it will not completely remember everything that you told it before unless, say, you intelligently summarize (see the sketch after this list).
  3. There is no guarantee that the logic or factuality of your piece will be valid, or that the sources you cite will be accessible or even relevant to what you are writing about, as you can see from the questionable references.
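To make point 2 a bit more concrete, here is a rough sketch of the kind of bookkeeping involved. This is my own illustration, not anything ChatGPT exposes: the 4,000-token budget mirrors the roughly 4k-token context of the GPT-3.5 models of that era, the helper names are made up, and the token count uses the tiktoken library if it happens to be installed, falling back to a crude characters-per-token estimate otherwise.

```python
# Illustrative sketch: keep a conversation inside a fixed token budget by
# dropping the oldest turns once the budget is exceeded. In practice you
# would summarize the dropped turns instead of discarding them outright,
# which is the "intelligently summarize" step mentioned above.

def count_tokens(text: str) -> int:
    """Count tokens with tiktoken if available; otherwise approximate."""
    try:
        import tiktoken  # optional dependency: pip install tiktoken
        return len(tiktoken.get_encoding("cl100k_base").encode(text))
    except ImportError:
        return max(1, len(text) // 4)  # rough heuristic: ~4 characters per token

def trim_history(turns: list[str], budget: int = 4000) -> list[str]:
    """Drop the oldest turns until the whole conversation fits in the budget."""
    kept = list(turns)
    while kept and sum(count_tokens(t) for t in kept) > budget:
        kept.pop(0)
    return kept
```

Anything that falls outside that window is simply gone as far as the model is concerned, which is why long threads drift away from the original prompt unless you keep restating or summarizing it.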

Sorry to those of you out there hoping that ChatGPT was going to help you get your Master’s degree, but it’s not gonna happen right now.

Even if you can though, should you? I guess that’s up to each person to decide, but what I would say is that submitting something AI generated for a degree means that you didn’t get the degree — the AI did and got certified and you did not.

Let me not moralize this or romanticize education, but approach the matter logically: if this starts to happen on a large scale, I can imagine that companies and other institutions that used to take these degrees seriously will simply stop taking them seriously, making degrees as a whole about as worthless as MOOCs in the eyes of prominent companies (i.e. companies that actually generate large amounts of business and have a vested interest in hiring genuinely talented people). That would lead to what we already see, to a degree, in institutions such as tech companies and start-ups, where many companies don’t pay much attention to the particular degree you received, but rather to whether you can demonstrate the specific skills they are looking for and communicate your perspective in an interview in which there is no opportunity to use AI software.

How will artificial intelligence change not just education, but also the job market at large?

We’ll be finding out, and we’re going to be in for a wild, wild ride!

I’ll have lots more to say about this in the days ahead, so if you would like to read about the intersections between AI, writing, and education, do consider dropping me a follow and I’ll see you in my next pieces!

— V

Quora’s Poe – A First Look.

Stop what you’re doing right now and download Poe!


Unless you’re using an Android phone, in which case maybe you can drop what you’re doing right now, cry, buy an iPhone, then come back.


OKAY NO DON’T KILL ME AAAAAAAAAAAAAAAAAnd now, dear readers, we talk about Poe! 

Generated with Midjourney!

If you don’t know what Poe is, Poe is not this Poe — it’s question and answer platform Quora’s brand new AI baby, now available on iPhone and iPad!

Initially, buying into the hype of the Microsoft and Google AI arms race, I thought that Poe was just a chatbot trained on Quora data, and by extension on the millions upon millions of questions and answers it contains, but I realized that it wasn’t (although that’d be cool, though uh… I’m not sure how that’d work, considering that nowadays a ton of questions on Quora are kind of powered by people copy-pasting ChatGPT in order to rank?).

Anyway, Poe isn’t exactly a chatbot in and of itself; it is what I would describe as a chatbot aggregator, which means that it collects several different Large Language Models (LLMs) into a single interface. No wonder, considering that the project is called Poe because it’s short for “Project Open Exchange”, which I guess alludes to the premise that you can ask questions of a variety of different AI chatbots and receive answers relatively quickly. Already on the platform you will see three main chatbots: the first in the lineup is Sage, the next is Claude, and the last of the bunch is Dragonfly. Sage and Dragonfly are powered by OpenAI, and Claude is powered by Anthropic PBC and its Constitutional AI framework.

Each has its own approach to generating conversations, with OpenAI’s models trained via RLHF and Claude via Constitutional AI and harm reduction, but I think you should try them for yourself to see the differences, and perhaps also read this article from Scale for a detailed comparison.
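If the phrase “chatbot aggregator” sounds abstract, here is roughly what it means in code terms. This is a hypothetical sketch of my own: the class and the stand-in backends are made up, and a real implementation would wrap the OpenAI and Anthropic APIs rather than lambdas.

```python
from typing import Callable, Dict

# A backend is simply "a function that maps a prompt to a reply"; in a real
# aggregator each backend would call a different provider's API.
Backend = Callable[[str], str]

class ChatAggregator:
    """One front-end interface, several model backends behind it."""

    def __init__(self) -> None:
        self.backends: Dict[str, Backend] = {}

    def register(self, name: str, backend: Backend) -> None:
        """Add a named model to the shared interface."""
        self.backends[name] = backend

    def ask(self, model: str, prompt: str) -> str:
        """Route the user's prompt to whichever model they selected."""
        return self.backends[model](prompt)

# Stand-in backends for illustration only.
aggregator = ChatAggregator()
aggregator.register("Sage", lambda p: f"[Sage-style answer to: {p}]")
aggregator.register("Claude", lambda p: f"[Claude-style answer to: {p}]")
aggregator.register("Dragonfly", lambda p: f"[Dragonfly-style answer to: {p}]")

print(aggregator.ask("Sage", "Why is the sky blue?"))
```

The value Poe adds, in other words, is less about any single model and more about the routing layer and the shared interface around them.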

You’ll see each of these on the left end of the app and you can select them and start talking the way you can with ChatGPT.

Dragonfly is fast, but not significantly faster than Sage at the moment, I think – although I’d probably appreciate the speed a bit more in the event that this app becomes super popular.

The question you might have is: apart from the number of models available, how is any part of this experience different from ChatGPT or from Claude, though?

Let’s compare. 

So here, I ask ChatGPT why the sky is blue, and receive a pretty reasonable response. 

It takes about 10 seconds to complete. On the other hand, when we do the same thing with Sage, which is also an OpenAI model, we get the following in 4-5 seconds of generation time – significantly faster than ChatGPT, and not bound by the fact that ChatGPT has millions of users concurrently using it… But oh wait.

See the blue links?

When you hit these…


Sage begins to elaborate while providing further links as well.

To be precise, these aren’t exactly links: when you click them, they prompt the model with a new text prompt related to the words you clicked, in turn providing more context and elaboration on what you asked about and helping you fill in those knowledge gaps real quick.
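As a guess at the general mechanics (this is my own toy illustration, not Poe’s actual implementation; the function name and prompt template are invented), clicking a highlighted term is essentially just another turn sent to the same model:

```python
def follow_up_prompt(keyword: str, previous_answer: str) -> str:
    """Turn a clicked keyword into a prompt asking for elaboration in context."""
    return (
        f"In your previous answer you mentioned '{keyword}'. "
        f"Please elaborate on '{keyword}' in that context.\n\n"
        f"Previous answer:\n{previous_answer}"
    )

# Clicking a highlighted term appends a new, pre-written turn to the chat:
conversation = ["Why is the sky blue?", "...because of Rayleigh scattering..."]
conversation.append(follow_up_prompt("Rayleigh scattering", conversation[-1]))
```

The clever part is the user experience rather than the machinery: the links surface follow-up questions you might not have thought to ask.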

I think that this is super cool and definitely a step in the direction of the future of search, because it means you can go down a daisy chain of conversations: ask questions, have them answered, and be prompted to ask about the things you don’t know about. I can totally imagine this as a question-and-answer service, and could even imagine something like it replacing Quora, if not for the fact that subjective experiences and up-to-date information are still relevant and important.

While at the moment this is constrained to material within the training set and by extension data limited to 2021, one can only imagine what might happen at a later point when or if these models receive internet access!

Apart from that though, a cool feature of Poe is…

Social:

One of the fun things about Poe is a whole social element that’s integrated into the app itself.

There’s a feature that allows you to share the things that you’re looking at on your feed so that people can see what you generate – essentially, whatever you ask, once you hit the Share icon and then “Share on Poe”…

…which will let you create a little set of posts that just randomly show up on people’s feeds, kind of like a little TikTok-esque feed full of the prompts that people around the world are publicly sharing, which others can upvote and downvote at their leisure, along with little bubbles that show the kinds of prompts people have shared at any given point.

I think it’s kind of cool that you get to see these questions, mainly because you get to see how people around the world are choosing to interact with AI and how they’re choosing to prompt things; it creates a (human) communal experience and keeps things light and fun 😀

Here are a couple of examples:

It’s cool to see what people are thinking and writing about, isn’t it? Very much in the spirit of Quora, I think. It does make me wonder if that’s the next evolution of the platform, although I still imagine it might be difficult for algorithms to source personal data or subjective opinions for their training data, or to make people willingly choose to submit them; perhaps the platforms will coexist? I don’t know.

Anyway, since the thing’s called Poe, I decided to ask Poe to go right ahead and role-play Poe and the Raven.

Uh, towards the end I went and created a bit of an Oxford Union style debate there.

Anyway, I found the chatbot aggregation, links, and social features to be pretty cool and solid features in the app at large, and these all make me really wonder how the platform’s going to evolve in the days ahead.

Some small concluding thoughts:

I guess Poe is trying to unite all the different AI models and chatbot applications in one space, and that does kind of make sense, but I would guess that some of the companies building LLMs simply won’t feel the incentive to participate (or won’t have the capital and investment to afford the training costs). Meanwhile, the companies that do offer these APIs will be happy to charge Quora, or perhaps at a later point the end users of Poe, for using those APIs when they eventually monetize (it’s already happening with ChatGPT), while still keeping their latest and greatest models for their own proprietary use and for paying users, so that those users get to use their products first.

Still, what we’ve got here is definitely pretty great already, and I look forward to seeing how this platform’s going to develop in the days ahead!

To end this little exploration, I couldn’t resist making a rap battle about sentient AI with a tiny bonus at the end.

See: https://poe.com/victortan/1512927999828824

And with that, the mic drops. Thanks for reading as always, and over and out!