Artificial Intelligence

No, ChatGPT is NOT making you stupid.

Sepupus, the internet has been abuzz of late because of a new MIT study called “Your Brain on ChatGPT”.

All around Reddit and the wider internet, people are forming wild conclusions, reading patterns in the stars, and deciding (unilaterally, or with the agreement of whoever happens to be around) that ChatGPT is making people stupid, that MIT researchers have said so, and that it must therefore be true.

I find all of this fascinating.

Now, in what way is this related to economics if at all?

Well, artificial intelligence is a major part of our economy and will remain so for the foreseeable future, shaping and reshaping both the economy and how we treat human capital, in ways that are sometimes intuitive and sometimes not, and in ways more subtle and interesting than the standard narrative of robots replacing human beings might suggest.

It’s interesting to think about how it will affect the way we live and work in this ever-changing world. With that in mind, here’s my perspective.

I do not generally think that ChatGPT is making us stupid.

I read the MIT study earlier, and I broadly understand the way that it is constructed.

You can have a look at it here.

Link: https://arxiv.org/pdf/2506.08872

Basically, they asked participants to write SAT-style essays (on topics chosen from a set of prompts) across three sessions, in three different groups:

1. One purely using their brains

2. One using Google

3. One using ChatGPT

Then, they had some participants come back for a fourth session where they swapped people from one group to another — 18 people did this in total.

Now this is what ChatGPT says, in summarizing what happened:

(AI generated. As full disclosure, I do use AI-generated content on this website once in a while, so consider this notice that you may see it here from time to time; I affirm that I will curate it to ensure it is high quality, accurate, and matches my experience. I hope you won’t mind, as what matters more, I think, is the specific choice of what to show you rather than whether the content happens to be generated by AI!)

What the Study Did

The researchers wanted to understand how using ChatGPT-like tools (called LLMs, or large language models) affects your brain and your essay writing.

They divided participants into three groups:

  1. LLM group — people who used ChatGPT to help write their essays.
  2. Search Engine group — people who could use Google to help them.
  3. Brain-only group — people who weren’t allowed to use any tools; they just used their brains.

Each person wrote three essays under their assigned condition.

In Session 4, they mixed things up:

  • People who had used ChatGPT before were asked to now write essays without it (LLM-to-Brain).
  • People who had never used ChatGPT were now allowed to (Brain-to-LLM).

Only 18 participants completed this fourth session.

What They Measured

They used several ways to assess the participants’ thinking and writing:

  • EEG (electroencephalography): This measures electrical activity in the brain. They looked at brainwaves to see how engaged or active the brain was.
  • Essay analysis: They checked the essays using Natural Language Processing (NLP), human teachers, and an AI-based scoring system.
  • They also looked at how similar or different the essays were (in terms of topics, words used, named entities, etc.).
  • Self-reports: They asked participants how much they “owned” or felt connected to their writing.

What They Found

🧠 Brain Activity:

  • Brain-only group had the strongest and most widely connected brain activity. Their brains were working hard and across many areas.
  • Search Engine group had moderate brain engagement.
  • LLM users had the least brain activity and the weakest connectivity — indicating low mental effort.
  • When LLM users switched to Brain-only, their brain activity stayed low. It was as if their minds were still in “autopilot” mode — under-engaged.
  • When Brain-only users switched to LLM, they had high activity in memory and visual/spatial reasoning areas — kind of like how Google users behaved.

📄 Essay Quality and Similarity:

  • Essays from each group became more similar within their group — especially in wording and topics. LLM users’ essays were more homogeneous.
  • LLM users had the lowest sense of ownership of their essays and often couldn’t remember or quote what they had written.
  • Brain-only users had the highest sense of ownership and memory of their writing.

⏳ Long-Term Effects:

  • Over 4 months, the people who used LLMs consistently:
      • Had weaker brain engagement
      • Wrote more similar, less original essays
      • Felt less connected to their work
  • This suggests that relying too much on ChatGPT may make people less mentally engaged and less able to learn deeply.

Bottom Line (I disagree with this)

  • LLMs like ChatGPT make writing easier, but they might also reduce mental effort and learning.
  • This has serious long-term implications for education, especially if students use LLMs without actively thinking.
  • The study doesn’t outright say ChatGPT makes you stupid, but it shows that heavy dependence on AI tools may hinder cognitive growth and originality over time.

Alright, no more AI.

I disagree with the interpretation, and I’ll tell you why.

It seems simple and intuitive to conclude that ChatGPT is making people stupid because of the lowered brain activation in the people who use LLMs over the period of several months.

However, to my mind, there are several problems with that, and it is good that the authors of the study acknowledged its limitations and the need for more extensive research: the participants had no choice of LLM, they were all recruited from nearby universities and were not a diverse sample, and the task itself was a narrow one.

Firstly, as the researchers admitted, this task was specifically related to essay generation with a limited set of topics.

Secondly, the observed drop in brain connectivity cannot be purely and meaningfully attributed to a decline in cognitive performance; it could just as well reflect reduced engagement with the task.

For instance, people who used ChatGPT may not have been so absorbed with their first, second, and third essays and therefore when they came to the final task, they may have just come in with no strong feelings whatsoever.

This can be interpreted as a decline in cognitive performance, but should it be interpreted as such?

The researchers do not tell us, and it is probably something that they did not really look into in the context of this study.

Let’s also consider a broader point about intelligence at large, now in a Sepupunomics context.

While working memory and brain connectivity might be taken as indicators of intelligence, it is unclear that they are the sole indicators, or that a lack of connectivity or engagement indicates a lack of intelligence.

In fact, what we consider ‘intelligent’ has changed drastically relative to what we used to understand as intelligent, and given how fluid the notion of intelligence has been throughout history, we have no reason to suppose that the future will be static or unchanging, or that connectivity or engagement in this particular context indicates the presence or absence of intelligence in a person.

Intelligence in every era has always been defined relative to outcomes that we consider to be valuable or worthwhile; as Naval Ravikant has observed, and I paraphrase, the intelligent man or woman is the one who gets what they want out of life.

In every generation, social and economic conditions have changed, and human beings and our brains have adapted and evolved in relation to those social, economic, and material conditions.

Accordingly, the jobs and tasks that we now consider valuable have also changed vastly compared to the past, and so has what counts as valuable human capital:

Human capital: the value employees bring to a company that translates to productivity or profitability, and more loosely, the value that human beings bring into an organization (whether a company, a nation-state, or the world) that translates into benefit to the world.

Automated gates have replaced the toll booth operators who used to sit there lazily, one after another, and the ATM has made it so that we speak to bank tellers only when there are special circumstances we cannot handle ourselves; the job we call ‘farmer’ now varies across countries and civilizations, and can still mean the smallholder carrying a hoe and wearing galoshes, or it can mean the operator of a grand-scale tractor fleet running cloud-seeding operations with artificial intelligence.

For many reasons, including these changes in technology, the jobs of our era have changed, the demands of employers have changed because different skills are now required, and what we call or consider valuable human capital has changed. This is not theoretical: it has already been happening for years, and it will continue to happen in the years to come.

Coincidentally, I was speaking with some students earlier and telling them how it has become normal and uncontroversial that people no longer remember phone numbers… But that’s not a bad thing, and it doesn’t mean that people have become stupid because they can no longer remember phone numbers.

Rather, it points to the fact that what is called cognitive offloading is now possible: because technology permits it, we can use phones as external cognitive storage, freeing those cognitive resources from memory work. In our modern social context, it is the person who cannot operate a phone who would likely be considered “stupid”, not the person who cannot remember a phone number but can retrieve it from their device.

The same thing has happened with directions: they no longer preoccupy most of us, because Google Maps has replaced the need to consult physical maps and work out how to get to places, even if it doesn’t cover every location and won’t always work.

My Assessment:

Given the limited nature of the task, the possible alternative interpretations of the data, and the fact that intelligence can certainly be defined in other ways, I cannot conclude that there is a causal link between ChatGPT usage and a drop in intelligence or an increase in stupidity.

Instead, in critiquing the study, I would say that ChatGPT allows us to participate in the world in new and different ways, which some might argue is reflective of heightened intelligence, and which was not accounted for by the study.

Note, for example, that the study task very much does not represent the best use of ChatGPT: merely using it to generate essays and then copying and pasting the contents to create a Frankenstein creation that the creator, so to speak, barely knew and could only appreciate at a surface level.

We are perhaps aware that we should not ask camels to climb trees, fish to fly in the skies, or birds to swim across the English Channel.

That would be absurd and it would be entirely illogical.

It is fun to visualize, though!

In the same way, I think it is silly to suppose that we should measure people’s brain waves while they use ChatGPT and then draw easy conclusions about whether they have become smarter or dumber, when in the first place, using ChatGPT in that particular situation was analogous to the somewhat colourful examples I provided earlier.

With ChatGPT, people have the ability to explore a very wide range of topics very quickly.

They have the ability to confirm their assumptions, assess their own thinking, ask questions that people would never ask under normal circumstances, and then figure out whether they were correct or if they were wrong and obtain directions for future research.

This cycle of confirmation and disconfirmation, research, understanding, analysis, and synthesis is extremely quick; but none of those things, or the ways they may relate to the ‘intelligence’ of the new era, is really tested by being asked to choose a single essay topic and write on it, and none of this is accounted for under the conditions in which the researchers placed the participants.

As the researchers acknowledged, the results we saw were highly context-dependent. If intelligence is, as Mr. Ravikant said, getting what you want out of life, it seems a little silly to imagine that a study involving writing an essay would generalize to the entirety of life and the vast array of situations, beyond essay writing, for which a person could ostensibly use ChatGPT.

We as laypeople may come up with a hundred misconceptions about what the results may or may not show, and it is entirely a person’s right to talk about what happened to their mother/father/sister/brother/irresponsible child/precocious baby using ChatGPT or anything else they like…

But the capital-T Truth remains out there and definitely should be a subject of investigation for the future.

Conclusion:

We cannot unambiguously conclude that ChatGPT inherently makes you stupid or makes you smart, certainly not from this study. The authors affirm this as well, and the truth, as it turns out, remains a matter of opinion.

Here is mine.

My guess is that the capital-T Truth is not that people who use ChatGPT are becoming more stupid, at least not with respect to how our society defines, or will later redefine, intelligence in light of the sea change that AI is bringing to our world; after all (and this is AI), Malaysia’s MyDIGITAL blueprint and Singapore’s AI governance frameworks both acknowledge that productivity in the 21st century isn’t just about raw mental horsepower: it’s about tool fluency, adaptability, and strategic attention.

Like any other tool, ChatGPT can make you stupid or it can make you smart, depending on how you choose to use it. Calculators can certainly make you dumb if you repeatedly bash them against your head and rewire your brain the wrong way, I suppose, although that’s not really something people use calculators for. Perhaps we made up for what we lost in arithmetic skills with a greater and vaster exposure to problems we would never otherwise have encountered.

This doesn’t exclude higher levels of talent from emerging within the system as outliers who stand beyond the calculator, the pen, the paper, and certainly ChatGPT; and it doesn’t exclude the possibility that, because of AI and the way all of us are using it and living through it in this era, even the paradigm of what we consider worthwhile to teach and to learn, both in economics and in life, will change along the way.

On my part, I am pretty confident that I have become smarter as a result of ChatGPT than I otherwise would have been, relative to the version of me in an alternate universe where it had never come into existence.

Of course, there is no way to construct a counterfactual or to disconfirm that. But I suppose that in the long run, and on the balance of things, time will tell. If using this technology continues to help me get what I want out of life, and everything good along the way, in ways that continue to affirm my sense that this technology is game-changing, cognitively altering, and a complete break from the way we used to do things, then I suppose I will not have been entirely wrong in my assessment.

Thank you for reading, and see you in the next one!

V.

Future of AI Meetup at the Asia School of Business!

In the past couple of weeks, I’ve had the chance to speak at a couple of events concerning generative AI, most recently the Future of AI meetup with NextUpAsia at the Asia School of Business!

As one of the invited speakers for this event, I had the chance to talk a lot about the different uses of AI in enterprise, drawing a distinction between generative AI and AI for process improvement, and also got the chance to highlight my thoughts about artificial intelligence in the context of business and in education, to demonstrate Midjourney for enterprises (and to create a t-shirt design for the Asia School of Business!), and to meet many new and interesting collaborators with whom I think there will be lots of unique opportunities to work together.

Some of these clips show you what happened at the session:

#1: Panel discussion about generative AI technologies.

During this part of the session, my fellow panelists Johnson Goh, Shahbaaz D’Ali, and Jason Sosa had a lively discussion with the audience about the present and future of AI technologies, touching upon the impact of generative AI on different industries, as well as some of the core limitations of generative AI.

Everyone came well-prepared with examples and topics to discuss, and it was mind-blowing to watch the level of discussion on the floor that night! My particular contribution to the session was to discuss the concept of the AI hallucination (which I’ll probably write a bit more about in a later post), and to observe that rather than replacing humans, generative AI will likely create increased demand for humans who can exercise higher-quality critical thinking and judgment.

#2: Education in AI – A Discussion.

During this part of the session, I had the chance to highlight some of the key challenges and opportunities posed by artificial intelligence in education, and to draw attention to the recent (and admirable!) actions of Singapore’s Minister Chan Chun Sing in setting out an AI strategy for the Singaporean education system, while also speaking about some of the challenges and opportunities that students will face in the future as a consequence of generative AI technologies.

Thoughts about the education system and AI preparedness.

#3: A Midjourney Enterprise demonstration.

This was rather spontaneous, but I had the chance to conduct a training about the ways that ChatGPT and Midjourney can integrate with one another in order to create something that is greater than the sum of their parts, by facilitating the process of image generation at scale.

Surprisingly, that’s not as boring as it might initially sound, because we had the chance to create foods from a Ramadhan bazaar and showcase them as proofs of concept to business owners along the way as well!

ChatGPT and Midjourney demonstration!

Conclusion:

Overall, this talk was an incredible experience that I think has opened up a host of interesting new opportunities and a new frontier for me, as well as an enjoyable evening where I truly felt that I was living the life of the mind as I participated in a conversation that no doubt will continue to dominate the consciousness of people around the world in the days moving forward.

It was a huge privilege to be a part of the AI conversation and to begin talking about the ways in which AI can be used by businesses as part of their journey onward into the 21st century, and I’m thrilled to look at the opportunities ahead in the days to come 🙂

Thank you to NextUpAsia and the Asia School of Business established in collaboration with MIT for having me!

P.S. Work has been busy, but it’s beginning to dovetail more with writing and creating a lot more – I’ll try to write more regularly soon!

ChatGPT For Software Development

Today, I had the incredible opportunity to interview a software developer from Maps72 about the role of ChatGPT in software development, and you’ll have the opportunity to watch it right here:

For a description of the event, I guess you could just read the description that I placed on LinkedIn (Feel free to connect!):

It was my first time hosting an event and a chat like this in a while, and I really enjoyed learning from Rain throughout the course of this rousing conversation. Please enjoy the chat and what it brings, and I hope that you learn a ton from it!

I’ll be trying out a tool that Si Eian (one of my numerous friends from the ChatGPT group) told me about recently, which lets you summarize a piece of work and have the takeaways on hand right away. But if you have the time, I highly recommend listening to Rain and everything he has to say: he is a consummate professional and a wonderful speaker, and the conversation will give you specifics on how to level up your software development game in 2023. 🙂

Also, hmm – I know that I was a little bit emo in the post a little earlier today, but trust me when I say that a lot of interesting things are lining up in this world for me for some reason.

I don’t quite know why and I don’t quite know how, but they are somehow – and I’m very happy for it.

Lots of new friends along the way, lots of new joys, and lots of peace that each day is somehow meaningful, educational, a chance to share new things with the world that I would have never imagined having just a while ago 🙂

Cheers and here’s to the next part!!

ELIZA – The Chatbot That Started It All.

I’ve been doing quite a bit of reading recently to prepare for BiZ Gear Up on the 24th, and part of that led me to read a bit more about the early history of chatbots – I hope you’ll enjoy this one!

We take a brief moment to move away from the hype that is OpenAI‘s ChatGPT, and take a brief intermission as we make a small trip back in time.

Picture this: it’s the year 1966, and a computer scientist at the Massachusetts Institute of Technology‘s Artificial Intelligence Laboratory named Joseph Weizenbaum has just created something remarkable – the world’s very first chatbot: ELIZA.

Image credit: Public domain via Wikimedia Commons

Just consider this example of a conversation from Norbert Landsteiner’s 2005 implementation of ELIZA, and you can see what it was capable of.

ELIZA was designed to simulate conversation by responding to typed text with pre-programmed phrases and questions.

But what made ELIZA so special was that it was programmed to mimic the conversational style of a therapist, in particular a Rogerian therapist.

Users could “talk” to ELIZA about their problems and concerns, and the chatbot would respond with empathetic and non-judgmental phrases like “Tell me more about that” or “How does that make you feel?”

It wasn’t just a simple question-and-answer program – it was designed to provide a sense of emotional support and understanding that reflects interestingly on the ways that people derive comfort from self-affirmation.

Weizenbaum didn’t intend for the chatbot to be taken very seriously, calling it a “parody” in his 1976 book “Computer Power and Human Reason”… But the way the chatbot was received suggested it was far more than a parody.

The response to ELIZA was overwhelming.

People were fascinated by this new technology that could seemingly understand and respond to their thoughts and emotions, and the program quickly gained popularity as people tried the chatbot.

But perhaps what’s most remarkable about ELIZA is that it wasn’t just a novelty. Weizenbaum’s creation laid the foundation for decades of research in the field of natural language processing and artificial intelligence.

ELIZA was the first chatbot, but it certainly wouldn’t be the last; its legacy lives on in the many conversational AI programs we use today, in our Bings, Bards, ChatGPTs, Claudes, and the many more that exist now and will exist in the future.

Can’t wait to see what is to come 🙂

ChatGPT Explodes In Malaysia

ChatGPT’s been around for a while, but as I’ve noted, my home country of Malaysia has been a bit slow in dealing with the hype it’s generated…

But that may have changed.

A while ago, I joined ChatGPT Malaysia.

Today, it exploded and hit 1k, and it shows no sign of stopping; the hype is real!!

Why?

A mixture of Manglish, an anonymous dude who prompted OpenAI’s ChatGPT with the legendary Manglish prompt, the combined efforts of Kenneth Yu Kern San and Jornes Sim, and, I guess, a huge, huge dose of the wonders of ChatGPT x)

My own small contribution (lol!)

I guess I’ve evolved into the comments guy – take a look here if you want 🙂

Ah, what can I say?

It’s truly fascinating to be at the beginning stage of a revolution.

First off, congrats and creds to Pang for being infinitely better than I am at managing communities at the moment – something I’m definitely looking forward to learning more about in the days ahead!!

Second off, I care a lot about the deeper significance of things, and I’m incredibly glad that this is one of the many things starting AI off on the right foot in Malaysia, my home country. Where this will go and what will happen, I have no idea, but I really look forward to watching what the world’s going to bring 🙂

Amongst other things, I guess it’s brought the ChatGPT Plus subscription, which you now have to join a waitlist to sign up for.

Can’t wait to see what’s next, and thank you based Sam Altman and OpenAI for creating this gift to humanity 😛

Till the next one!!!

P.S. Is it too ambik peluang (too opportunistic) if I tell you that I’m the creator of the “Transforming Your (Creative) Writing With ChatGPT” course on Udemy? 😛

Thanks and till next time!

A Strange War: AI v. AI Detectors

“Upload your essay into Turnitin by 11:59pm on Thursday night?
You meant start the essay at 11:57 then submit it at 11:58, am I right?” — 
Gigachad ChatGPT student.

We begin with the sweet smell of plagiarism.

It wafts in the air as educators run around like headless chickens, looking here, looking there as they flip through oddly good essays with panicked expressions.

“Was this AI-generated, Bobby?!” says a hapless teacher, staring at a piece of paper that seems curiously bereft of grammatical errors, suspecting that Bobby could never have created something of this caliber.

“No teacher, I just became smart!” Bobby cries, running off into the sunset because he is sad, he is going to become a member of an emo boyband, and he doesn’t want to admit that he generated his homework with ChatGPT.

generated by Midjourney, if that wasn’t obvious

This smell casts fear and trepidation over every part of our education system, for it threatens to break it; education is special and meant to be sacrosanct. After all, is it not the very system designed to teach humans facts and knowledge, and above all to communicate and collaborate to solve the problems of our era with intelligence, initiative, and drive?

It’s unsurprising that the world of education has flipped out over ChatGPT, because artificial intelligence opens up the very real possibility that schools may be unable to detect AI-generated work.

Fun and games, right? It’s just a bunch of kids cheating on assignments with artificial intelligence? It’s not going to affect the older generation?

As it turns out, no — that’s not the case. I’ll explain why later.

But before that, let’s talk a bit about the part of our education system that AI is threatening: Essay-writing.

If students simply choose to let their work be completed by artificial intelligence and forget all else, that just means they’ve forgone the education they were supposed to receive, thereby crippling themselves by an act of personal choice, right…?

But each of us has been a student, and if we have children, our children either will be or have been students too; there is a deep emotional connection that stretches across the entire world when it comes to this.

Therefore, when Princeton University CS student Edward Tian swooped in to offer a solution, it’s not all that surprising that the world flipped.

Enter GPTZero.

“Humans deserve the truth.” A noble statement and a very bold one for a plagiarism detector, but something that’s a little deeper than most of us would probably imagine.

But consider this.

Not everyone who uses AI-generated text is cheating in the sense of doing something they’re not supposed to do and thereby violating rules, so the term ‘plagiarism detector’ doesn’t quite, or doesn’t always, apply here.

This algorithm, as with other algorithms that attempt to detect AI-generated text, is not just a plagiarism detector that serves to catch students in petty acts of cheating: it is an AI detector.

An AI Detector At Work.

So how does it work?

GPTZero assigns a likelihood that a particular text is generated by AI by using two measures:

Perplexity, and Burstiness.

Essentially, in more human language than that which was presented on GPTZero’s website, GPTZero says that…

The less random the text (its ‘perplexity’), the more likely it was generated by an AI.

The less that randomness varies across the text (its ‘burstiness’), the more likely the text was generated by an AI.

Anyway, GPTZero gives each text a score for perplexity and burstiness, and from there outputs a probability that given sentences of the text were generated by AI, highlighting the relevant sentences and displaying the result to the user.
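Since we’re on the subject of how such scores are computed: GPTZero’s exact implementation isn’t public, so what follows is only a minimal sketch of the general idea in Python, assuming the Hugging Face transformers library and GPT-2 as a stand-in scoring model. It treats perplexity as the exponentiated average negative log-likelihood per token, and uses the spread of per-sentence perplexities as a crude stand-in for burstiness; the model choice, the sentence splitting, and the “low score means AI” heuristic are my own illustrative assumptions, not GPTZero’s actual method.

    # A toy sketch of the perplexity/burstiness idea; NOT GPTZero's actual code.
    # Assumes the Hugging Face `transformers` library and GPT-2 as a stand-in scorer.
    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Perplexity = exp(average negative log-likelihood per token)."""
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # Passing the input ids as labels makes the model return the mean
            # cross-entropy loss over the sequence.
            loss = model(**enc, labels=enc["input_ids"]).loss
        return math.exp(loss.item())

    def burstiness(text: str) -> float:
        """Spread of per-sentence perplexities; very uniform text looks 'suspicious'."""
        sentences = [s.strip() for s in text.split(".") if s.strip()]
        scores = [perplexity(s) for s in sentences]
        mean = sum(scores) / len(scores)
        return (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5

    sample = ("The sky appears blue because air molecules scatter shorter wavelengths "
              "of sunlight more strongly. This effect is called Rayleigh scattering.")
    print(f"perplexity: {perplexity(sample):.1f}, burstiness: {burstiness(sample):.1f}")
    # Heuristic only: low perplexity AND low burstiness -> "more likely AI-generated".

In other words, a detector of this kind never “understands” the essay; it just asks how predictable the text looks to a language model, which is exactly why paraphrasing tools can throw it off, as we’re about to see.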

Alright, sounds great!

Does GPTZero deserve the hype, though?

…Does this actually work?

Let’s try it with this pleasant and AI-generated text that is exactly about the importance of hype (lol).

That’s 100% AI-generated and we know that as fact.

…Would we know if we didn’t see it in the ChatGPT terminal window, though?

…Okay, let’s not think about that.

Down the hatch…

…And boom.

As we can see, GPTZero, humanity’s champion, managed to identify that the text that we had generated was written by AI.

Hurrah!!!

Or…?

I proceeded to rewrite the essay with another AI software.

…After which GPTZero essentially declared:

So nope, GPTZero can’t detect rewritten texts that were generated with AI — which it should be able to if it truly is an *AI* detector in the best sense — and which in turn suggests that the way that it’s been operationalized has yet to allow it to be the bastion protecting humanity from the incursion of robots into our lives.

It’s not that GPTZero, or even OpenAI’s own AI text reviewer, amongst a whole panoply of different AI detectors, is bad or poorly operationalized, by any means. Rather, the operationalization is supremely difficult because the task is punishingly hard: we are unlikely to have a tool that detects AI-generated text 100% of the time unless we adopt watermarking (MIT Technology Review), combine multiple detection algorithms, or come up with alternative measures entirely.

An Arms Race between AI Large Language Models (LLMs) and AI Detectors — and why you should care (even if you’re not a student).

As I’ve mentioned, there is an arms race at hand between AI Large Language Models (LLMs) like ChatGPT and AI detectors like GPTZero, the likely consequence of which is that the two will compete with one another and each will make progress in its own way, pushing both technologies forward.

Personally, I think that AI detectors are fighting a losing battle against LLMs for many reasons, but let me not put the cart before the horse — it is a battle to watch, not to predict the outcome of before it’s even begun.

Implications of this strange war:

But why should you care about any of this if you’re not a student? It’s not like you’re going to be looking at essays constantly, right?

Let’s set aside the fact that you’re reading a blog post right now, and let’s also move beyond the plagiarized essay angle we’ve been thinking about, as we turn to the fact that ChatGPT is a language model.

It’s a good bet that you use language everywhere in your life, business, and relationships with other people in order to communicate, coordinate, and everything else.

When we go around on the Internet, it’s not always immediately evident what was AI generated, what was generated by a human or, for that matter, what was inspired by an AI and later followed through by a human.

The whole reason we need something like a plagiarism detector is that we may not even be able to tell, with our own eyes and minds, whether a particular piece of language (which we most often experience in the form of text) is AI-generated, to the point that we literally need to rely on statistical patterns to evaluate something sitting directly in front of us, even as we recruit our own brains to evaluate the output as a whole.

The problem is…

Language doesn’t just exist as text.

Language exists as text, yes, but also as speech. Moreover, speech and text are easily convertible to one another — and we know very well what ChatGPT is doing: Generating text.

We now know that there are Text To Speech (TTS) models that generate speech from text. They’re not necessarily all great, but that’s beside the point: it presages the translation from text into voice.

Think about it.

If AI-generated voices become sufficiently realistic in sound and intonation (VALL-E, is that you?), how would you know that these voices aren’t real, unless severe model safeguards impede the models from functioning as they otherwise would?

Now combine that indistinguishable voice with sophisticated ChatGPT output that can evade any AI detector, and which in turn may, depending on the features that end up being developed, evade your own capacity to tell whether you are even interacting with a human at all.

How would that play out in the metaverse?

How would that play out in the real world, over the phone?

How would you ever know whether anyone that you’re interacting with is real or not? Whether they are sentient?

The battle between AI and AI Detectors is not just a battle over the difference between an A grade and a C grade.

It’s a battle over a future where what’s at stake is identifying what even qualifies as human.

How AI Tech Will Disrupt Businesses (24th February)

Do you ever feel like you might have gotten yourself into something a little bigger than you’d imagined was possible?

Excited to announce that I’ll be speaking about AI for the “How AI Tech Will Disrupt Businesses” panel on the 24th of February! Thank you Vulcan Post for the feature and MrMoney TV x Entrepreneurs and Startups Malaysia for the invitation!

You’ll be able to meet me there directly and hear me talk about the ways AI is going to change businesses around the world alongside my fellow panelist, Richard Ker!

If you’ve not heard of Richard, the man is a legend at creating incredible infographics and marketing, and I respect both his astute observations and the value that he’s created for literally thousands of people throughout Malaysia and far beyond; the man is a true-blue digital authority. If you’re looking for something specific, feel free to check out this article that he’s written on Facebook, amongst other things; the man is everywhere!

In other words, what does this mean?

It means I need to level up!

This conference is something that I’m truly honored to be a part of, and a wonderful opportunity to learn from many incredible minds that I won’t be missing by any stretch of the imagination.

As we speak, I’m preparing with all my might, even as I read and learn more about artificial intelligence, building up again that reading habit so thoughtfully documented by my dear friend Sandy Clarke, which I’ll make sure to keep working towards in the days ahead as I build this platform.

Meanwhile, if you haven’t already, please feel free to join Artificial Intelligence Malaysia! I’ve had some pretty wild conversations in the past day or so, and it would be great to add a diversity of voices to the group especially if you’re really interested in AI and everything that it has to offer 🙂

In preparation for that, know that I’ve been reading extensively and creating lots of other content as well, because I know that I have to make this worth your time, and I will do my very best to do so.

Till we meet, then!

Quora’s Poe – A First Look.

Stop what you’re doing right now and download Poe!

…Unless you’re using an Android phone, in which case maybe you can drop what you’re doing right now, cry, buy an iPhone, then come back.

…OKAY NO DON’T KILL ME AAAAAAAAAAAAAAAAAnd now, dear readers, we talk about Poe! 

Generated with Midjourney!

If you don’t know what Poe is, Poe is not this Poe — it’s question and answer platform Quora’s brand new AI baby, now available on iPhone and iPad!

Initially, buying into the hype of the Microsoft and Google AI arms race, I thought that Poe was just a Chatbot that was trained on Quora data and by extension the millions upon millions of questions and answers that it contains, but I realized that it wasn’t (although that’d be cool, though uh… I’m not sure how that’d work considering that nowadays a ton of questions on Quora are kind of powered by people trying to copy-paste ChatGPT in order to rank?).

Anyway, Poe isn’t exactly a chatbot in and of itself; it is what I would describe as a chatbot aggregator, meaning that it collects several different Large Language Models (LLMs) into a single interface. No wonder, considering that the project is called Poe, reportedly short for “Platform for Open Exploration”, which I guess alludes to the premise that you can ask questions of a variety of different AI chatbots and receive answers relatively quickly. Already on the platform you will see three main chatbots: the first in the lineup is Sage, the next is Claude, and the last of the bunch is Dragonfly. Sage and Dragonfly are powered by OpenAI, and Claude is powered by Anthropic and its Constitutional AI framework.

Each has its own method of generating conversations, with OpenAI’s models trained via RLHF and Claude trained through Constitutional AI with an emphasis on harm reduction; I think you should try them for yourself to see the differences, and perhaps also read this article from Scale for a detailed comparison.
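If it helps to picture what an “aggregator” means in practice, here’s a minimal conceptual sketch in Python. To be clear, this is purely my own illustration under my own assumptions: the bot names match Poe’s lineup, but the routing table and the placeholder functions are hypothetical and say nothing about how Poe is actually built.

    # A conceptual sketch of a "chatbot aggregator": one chat interface, many backends.
    # Purely illustrative; the backend functions are hypothetical placeholders and
    # this is NOT how Poe is actually implemented.
    from typing import Callable, Dict

    def ask_sage(prompt: str) -> str:       # placeholder for an OpenAI-backed bot
        return f"[Sage] response to: {prompt}"

    def ask_claude(prompt: str) -> str:     # placeholder for an Anthropic-backed bot
        return f"[Claude] response to: {prompt}"

    def ask_dragonfly(prompt: str) -> str:  # placeholder for an OpenAI-backed bot
        return f"[Dragonfly] response to: {prompt}"

    # The "aggregation" is essentially a routing table from bot name to backend.
    BOTS: Dict[str, Callable[[str], str]] = {
        "Sage": ask_sage,
        "Claude": ask_claude,
        "Dragonfly": ask_dragonfly,
    }

    def chat(bot_name: str, prompt: str) -> str:
        """Send one prompt to whichever bot the user selected in the sidebar."""
        return BOTS[bot_name](prompt)

    print(chat("Sage", "Why is the sky blue?"))
    print(chat("Claude", "Why is the sky blue?"))

The point of the sketch is simply that the user-facing experience stays the same while the backends swap out underneath, which is the whole pitch of an aggregator.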

You’ll see each of these on the left end of the app and you can select them and start talking the way you can with ChatGPT.

Dragonfly is fast, but not significantly faster, I think, compared to Sage at the moment – although I’d probably appreciate that a bit more in the event that this app becomes super popular.

The question you might have is: apart from the number of models available, how is any part of this experience different from ChatGPT or from Claude?

Let’s compare. 

So here, I ask ChatGPT why the sky is blue, and receive a pretty reasonable response. 

It takes about 10 seconds to complete. On the other hand, when we do the same thing with Sage, which is also an OpenAI model, we get the following in 4-5 seconds of generation time, which is significantly faster than ChatGPT, though Sage also isn’t weighed down by the millions of users concurrently using ChatGPT… But oh wait.

See the blue links?

When you hit these… 

…Sage begins to elaborate while providing further links as well.

To be exact, these aren’t exactly links: when you click them, they send the model a new prompt related to the highlighted words, in turn providing more context and elaboration on what you asked about and helping you fill in those knowledge gaps real quick.
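As a rough mental model (my own guess at the pattern, not Poe’s actual code), each highlighted term behaves like a pre-built follow-up prompt that carries the earlier answer along as context:

    # A rough mental model of the clickable follow-up mechanic; my own guess at the
    # pattern, not Poe's actual implementation.
    def follow_up_prompt(previous_answer: str, clicked_term: str) -> str:
        """Build the prompt that clicking `clicked_term` would send to the model."""
        return (
            f"In the context of this earlier answer:\n{previous_answer}\n\n"
            f"Explain '{clicked_term}' in more detail and suggest related topics."
        )

    answer = "The sky is blue because of Rayleigh scattering of sunlight by air molecules."
    print(follow_up_prompt(answer, "Rayleigh scattering"))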

I think that this is super cool and definitely a step towards the future of search, because it means you can go down a daisy chain of conversations, ask questions, have them answered, and be prompted to ask about the things you don’t know about; I can totally imagine this as a question-and-answer service, and could even imagine something like this replacing Quora, were it not for the fact that subjective experiences and updated information are still relevant and important.

While at the moment this is constrained to material within the training set and by extension data limited to 2021, one can only imagine what might happen at a later point when or if these models receive internet access!

Apart from that though, a cool feature of Poe is…

Social:

One of the fun things about Poe is a whole social element that’s integrated into the app itself.

There’s a feature that allows you to share the things that you’re looking at on your feed so that people can see what you generate – essentially, whatever you ask, once you hit the Share icon and then “Share on Poe”…

…which lets you create a little post that will show up on people’s feeds, kind of like a little TikTok-esque feed full of the prompts that people around the world are publicly sharing, which others can upvote and downvote at their leisure, along with little bubbles that show the kinds of prompts people have shared at any given point.

I think it’s kind of cool that you get to see these questions, mainly because you get to see how people around the world are choosing to interact with AI; it creates a (human) communal experience and keeps things light and fun 😀

Here are a couple of examples:

It’s cool to see what people are thinking and writing about, isn’t it? Very much in the spirit of Quora, I think. It does make me wonder if this is the next evolution of the platform, although I still imagine it might be difficult for these models to source personal experiences or subjective opinions for their training data, or to get people to willingly submit them; perhaps the platforms will coexist? I don’t know.

Anyway, since the thing’s called Poe, I decided to ask Poe to go right ahead and role-play Poe and the Raven.

Uh, towards the end I went and created a bit of an Oxford Union style debate there.

Anyway, I found the chatbot aggregation, links, and social features to be pretty cool and solid features in the app at large, and these all make me really wonder how the platform’s going to evolve in the days ahead.

Some small concluding thoughts:

I guess that Poe is trying to unite all the different AI models and chatbot applications in one space, and that does kind of make sense. But I would guess that some of the companies building LLMs simply won’t feel the incentive to participate (or won’t have the capital to afford the training costs), while the companies that already offer these APIs will be happy to charge Quora, or perhaps eventually Poe’s end users, for using those APIs once they monetize (it’s already happening with ChatGPT); even then, they’ll keep their latest and greatest models for their own proprietary use and for their own paying users, so that those users get access first.

Still, what we’ve got here is definitely pretty great already, and I look forward to seeing how this platform’s going to develop in the days ahead!

To end this little exploration, I couldn’t resist making a rap battle about sentient AI with a tiny bonus at the end.

See: https://poe.com/victortan/1512927999828824

And with that, the mic drops. Thanks for reading as always, and over and out!