Today, I had the incredible opportunity to interview a software developer from Maps72 about the role of ChatGPT in software development, and you’ll have the opportunity to watch it right here:
It was my first time hosting an event and a chat like this in a while, and I really enjoyed learning from Rain throughout the course of this rousing conversation. Please enjoy the chat and what it brings, and I hope that you learn a ton from it!
I’ll be trying out a tool that Si Eian (one of my many friends from the ChatGPT group) told me about recently, which lets you summarize a piece of work and have the takeaways on hand right away – but if you can, I highly recommend listening to Rain and everything he has to say, because he is a consummate professional and a wonderful speaker, and the conversation offers specifics on how you can level up your software development game in 2023. 🙂
Also, hmm – I know that I was a little bit emo in the post a little earlier today, but trust me when I say that a lot of interesting things are lining up in this world for me for some reason.
I don’t quite know why and I don’t quite know how, but they are somehow – and I’m very happy for it.
Lots of new friends along the way, lots of new joys, and lots of peace that each day is somehow meaningful, educational, a chance to share new things with the world that I would have never imagined having just a while ago 🙂
You know, when I think back about the person that I was a long time ago, I wonder how I managed to become the person that I am today, and I don’t see that as an exaggeration.
I used to be the kind of person who would dream of becoming an investment banker – that’s who I was; I would imagine what it would look like to work 12, 13, 14, 15 hours a day, thinking to myself that that was the ideal.
But no, that wasn’t how things turned out.
I changed.
I became someone who wanted to make things happen for himself, to create something from the fabric of essentially nothing.
Why is that? Maybe it was a combination of different things – watching narcissism and the incessant desire to compare disappear, a sense of entitlement that placed me above others fading away, and a few other things that were tied to my identity and who I was in the past.
I’ve been doing quite a bit of reading recently to prepare for BiZ Gear Up on the 24th, and part of that led me to read a bit more about the early history of chatbots – I hope you’ll enjoy this one!
We take a brief intermission from the hype that is OpenAI‘s ChatGPT and make a small trip back in time.
Picture this: it’s the year 1966, and a computer scientist at the Massachusetts Institute of Technology‘s Artificial Intelligence Laboratory named Joseph Weizenbaum has just created something remarkable – the world’s very first chatbot: ELIZA.
Just consider this example of a conversation from Norbert Landsteiner’s 2005 implementation of ELIZA, and you can see what it was capable of.
ELIZA was designed to simulate conversation by responding to typed text with pre-programmed phrases and questions.
But what made ELIZA so special was that it was programmed to mimic the conversational style of a therapist, in particular a Rogerian therapist.
Users could “talk” to ELIZA about their problems and concerns, and the chatbot would respond with empathetic and non-judgmental phrases like “Tell me more about that” or “How does that make you feel?”
It wasn’t just a simple question-and-answer program – it was designed to provide a sense of emotional support and understanding that reflects interestingly on the ways that people derive comfort from self-affirmation.
Weizenbaum didn’t intend for the chatbot to be taken very seriously, calling it a “parody” in his 1976 book “Computer Power and Human Reason”… but the way the chatbot was received was far from parodic.
The response to ELIZA was overwhelming.
People were fascinated by this new technology that could seemingly understand and respond to their thoughts and emotions, and the program quickly gained popularity as people tried the chatbot.
But perhaps what’s most remarkable about ELIZA is that it wasn’t just a novelty. Weizenbaum’s creation laid the foundation for decades of research in the field of natural language processing and artificial intelligence.
ELIZA was the first chatbot, but it certainly wouldn’t be the last – and its legacy lives on in the many conversational AI programs we use today, in our Bings, Bards, ChatGPTs, Claudes, and the many more that exist now and are still to come.
Today, I delivered a quick seminar about ChatGPT, education, and the AI industry!
It was the first of three speaking events that I’m doing this week that lead up to something that’s a bit bigger.
We talked about a few things related to ChatGPT and the education system – mostly thoughts on the way Singapore has been responding to these developments, which has been materially different from the way pretty much any other country in the world has responded.
You’ll be able to watch the video here:
If you don’t have the time to watch it, we had a chat about writing, learning, personalized education, and a number of other things that I think you’ll enjoy when you get the chance to watch it – go and have a look if you’d like to hear some thoughts on that!
Speaking of students, one thing I’m looking forward to but didn’t actually imagine would happen will probably happen tomorrow; we’re not there, but soon!
I’ll have lots more to write in the day ahead, but I’d like to first start by getting a little bit more sleep, being better to myself, and preparing myself a little better by resting up for the day ahead 🙂 Till then!
The other day, I used OpenAI’s new Whisper algorithm for the first time…
…Only to realize something very, very strange.
If you’ve not heard of Whisper, it’s OpenAI’s automatic speech recognition (ASR) system, and it’s significantly more accurate than something like Siri (which I usually use) or other similar technologies.
Anyway, for an upcoming Medium piece, I chose to narrate everything into the voice memo app on my iPhone, in preparation to have it transcribed by Whisper, while I was on a drive from my home to my cello lesson about 15 minutes away.
Whisper did it *almost* perfectly!
Then, I uploaded this into ChatGPT just to format it but didn’t change any of the text.
This was perfect! It was well transcribed, everything looked good, and all that remained was for me to just post the thing, right?
…No.
Because you see, at this point, I started to wonder about a strange question that was starting to brew in my mind:
Was this text AI generated or was it human generated?
I asked several people this question, and almost all of them said the same thing – it was AI-assisted, but the text itself was primarily generated by a human.
To me, that makes a lot of sense because I dictated it in my own voice, and it wouldn’t make sense for it to be considered AI-generated unless what’s inside my head is not in fact a human brain, but rather some sort of supercomputer.
So I decided to check with GPTZero just to be sure.
Here’s what GPTZero had to say, as it casually marked the parts of my essay that it thought were AI-generated in bright yellow.
I was a little shook.
Essentially, the majority of the text was determined to have been generated by AI.
At first, I thought about a couple of different possibilities. I wondered to myself – was it because I had put the text through ChatGPT? Could the text have been watermarked or modified in some way that allowed GPTZero to determine that it had been generated by AI?
That didn’t seem to make sense, particularly since OpenAI has not yet implemented watermarks in text. Besides, the text definitely wasn’t modified in any way apart from the paragraphing.
Therefore, what was the only logical possibility here?
The only possibility was that, according to GPTZero, I sounded like an AI.
This made me think quite a fair bit, and it just so happened that I ran into my friend Sandy Clarke at the gym. (Sandy is wonderful and incredibly humble relative to what he’s achieved – check him out here!) We ended up discussing the matter, and he suggested that perhaps “artificial intelligence speech” is just speech of a formal nature, and that I should consider the speeches of JFK and Obama. So I decided to go right ahead and input JFK’s inaugural speech into GPTZero:
… So did this mean that John F. Kennedy and Obama were both advanced forms of artificial intelligence sent to planet Earth to rule over the most advanced societies in the world, over which no normal human could have presided?
It would be funny if that were true.
At this point, I started to realize that the way I spoke was simply similar to the way these people speak – which was exactly the kind of language GPTZero was identifying as AI-generated.
That’s not all that surprising, since my job is to help students learn how to write effectively, to assist them with their grammar, and also with the way that they use the English language.
But it made me start to wonder – when we interact with artificial intelligence, it’s a new type of interaction in which we’re essentially conversing with and reading from a tool all the time. Is it all that surprising that this could lead to language change on our part, and therefore a shift in the way that we think and communicate?
It’s not necessarily going to be the case that humans end up fusing with machine parts, as some movies suggest, but there are certainly going to be changes in our culture as a result of the way we interact with technology that perhaps aren’t immediately apparent at the outset; what are those changes going to be? It’s not immediately clear what the answers to those questions are.
It was definitely funny to think about this, though, because it leads into all sorts of interesting questions about sentience, and also about the people that we communicate with – what if the people around us end up adopting artificial intelligence language patterns to the point that we are unable to distinguish the language used by artificial intelligence from the language used by human beings?
That might be one of the ways in which we become more machinelike as a species, or perhaps not — either way, it was pretty interesting to watch this happen and to ask myself about the ways in which I am being influenced by AI, because we often think of humans and AI as being distinct and different from one another, and that there are clear boundary lines that separate us…
But how are those boundary lines changing over time? The answer to that question is unclear to me.
Yep.
As we interact with AI, I suppose that we will start to talk a little bit more like AI.
As we move forward in this world, I expect that AI detectors will not really be a meaningful way to detect human beings — and that our natural instincts of judgment and distinction may become just a little bit finer as we go through life.
On my part, I find it kind of funny that the people reading this piece might rightfully think that I am an AI — a sentient AI, maybe — but an AI for all intents and purposes.
To that I say… Who knows if that could happen to you too?
Today, it exploded and hit 1k, and it shows no sign of stopping; the hype is real!!
Why?
A mixture of Manglish, an anonymous dude who prompted OpenAI’s ChatGPT with the legendary Manglish prompt, the combined efforts of Kenneth Yu Kern San and Jornes Sim, and, I guess, a huge, huge dose of the wonders of ChatGPT x)
My own small contribution (lol!)
I guess I’ve evolved into the comments guy – take a look here if you want 🙂
Ah, what can I say?
It’s truly fascinating to be at the beginning stage of a revolution.
First off, congrats and creds to Pang for being infinitely better than I am at managing communities at the moment – something I’m definitely looking forward to learning more about in the days ahead!!
Second off, I care a lot about the deeper significance of things – and I’m incredibly glad that this is one of the many things that’s starting off AI on the right foot in Malaysia, my home country – where this will go and what will happen I have no idea, but really look forward to watching what the world’s going to bring 🙂
Amongst other things, I guess it’s brought a ChatGPT Plus subscription, which you now have to join a waitlist to sign up for.
Can’t wait to see what’s next, and thank you based Sam Altman and OpenAI for creating this gift to humanity 😛
“Upload your essay into Turnitin by 11:59pm on Thursday night? You meant start the essay at 11:57 then submit it at 11:58, am I right?” — Gigachad ChatGPT student.
We begin our discussion by discussing the sweet smell of plagiarism.
It wafts in the air as educators run around like headless chickens, looking here, looking there as they flip through oddly good essays with panicked expressions.
“Was this AI-generated, Bobby?!” says a hapless teacher, staring at a piece of paper that seems curiously bereft of grammatical errors, suspecting that Bobby could never have created something of this caliber.
“No teacher, I just became smart!” Bobby cries, running off into the sunset because he is sad, he is going to become a member of an emo boyband, and he doesn’t want to admit that he generated his homework with ChatGPT.
generated by Midjourney, if that wasn’t obvious
This smell casts fear and trepidation over every single part of our education system, for it threatens to break it. Education is special and meant to be sacrosanct — after all, is it not the very system designed to teach humans facts and knowledge and, above all, to communicate and collaborate to solve the problems of our era with intelligence, initiative, and drive?
It’s unsurprising that the world of education has flipped out over ChatGPT, because artificial intelligence opens up the very real possibility that schools may be unable to detect AI-generated work.
Fun and games, right? It’s just a bunch of kids cheating on assignments with artificial intelligence? It’s not going to affect the older generation?
As it turns out, no — that’s not the case. I’ll explain why later.
But before that, let’s talk a bit about the part of our education system that AI is threatening: Essay-writing.
If students simply choose to let their work be completed by artificial intelligence and forget all else, that just means they’ve forgone the education they were supposed to receive, thereby crippling themselves by an act of personal choice, right…?
But each of us has been a student, and if we have children, our children either will be or have been students too; there is a deep emotional connection that stretches across the entire world when it comes to this.
Therefore, when Princeton University CS student Edward Tian swooped in to offer a solution, it’s not all that surprising that the world flipped.
Enter GPTZero.
“Humans deserve the truth.” A noble statement and a very bold one for a plagiarism detector, but something a little deeper than most of us would probably imagine.
But consider this.
Not everyone who uses AI text is cheating in the sense of doing something they’re not supposed to and thereby violating rules, so the term ‘plagiarism detector’ doesn’t quite, or at least doesn’t always, apply here.
This algorithm, like other algorithms that attempt to detect AI-generated text, is not just a plagiarism detector that merely serves to catch students in petty acts of cheating — it is an AI detector.
An AI Detector At Work.
So how does it work?
GPTZero assigns a likelihood that a particular text is generated by AI by using two measures:
Perplexity, and Burstiness.
Essentially, in more human language than what’s presented on GPTZero’s website, GPTZero says that…
The less random the text (its ‘perplexity’), the more likely it was generated by an AI.
The less that randomness varies from sentence to sentence (its ‘burstiness’), the more likely the text was generated by an AI.
Anyway, GPTZero gives each text a perplexity score and a burstiness score, and from there outputs a probability that given sentences of the text were generated by AI, highlights the relevant sentences, and displays the result to the user.
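GPTZero’s actual model isn’t public, so here’s a toy sketch of the two measures above, purely as an illustration: a simple unigram word-frequency model stands in for a real language model, average per-word surprisal stands in for ‘perplexity’, and the variance of that surprisal across sentences stands in for ‘burstiness’. Everything here (the function names, the smoothing, the reference corpus) is an assumption for illustration, not GPTZero’s implementation.

```python
import math
from collections import Counter

def sentence_surprisal(sentence, freqs, total):
    """Average negative log-probability per word under a unigram model,
    with Laplace smoothing so unseen words don't blow up to infinity."""
    words = sentence.lower().split()
    if not words:
        return 0.0
    vocab = len(freqs)
    return sum(-math.log((freqs[w] + 1) / (total + vocab + 1))
               for w in words) / len(words)

def perplexity_and_burstiness(text, corpus):
    """Return (mean surprisal, variance of surprisal across sentences) --
    crude stand-ins for GPTZero's 'perplexity' and 'burstiness'."""
    freqs = Counter(corpus.lower().split())
    total = sum(freqs.values())
    sentences = [s.strip() for s in text.split('.') if s.strip()]
    scores = [sentence_surprisal(s, freqs, total) for s in sentences]
    mean = sum(scores) / len(scores)  # lower -> more predictable, more "AI-like"
    variance = sum((x - mean) ** 2 for x in scores) / len(scores)  # lower -> flatter
    return mean, variance
```

Under this heuristic, predictable text scores a low mean and uniformly predictable sentences score a low variance – and both of those would nudge a detector like this towards an “AI-generated” verdict.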
Alright, sounds great!
Does GPTZero deserve the hype, though?
…Does this actually work?
Let’s try it with this pleasant and AI-generated text that is exactly about the importance of hype (lol).
That’s 100% AI-generated and we know that as fact.
…Would we know if we didn’t see it in the ChatGPT terminal window, though?
…Okay, let’s not think about that.
Down the hatch…
…And boom.
As we can see, GPTZero, humanity’s champion, managed to identify that the text that we had generated was written by AI.
So nope, GPTZero can’t detect rewritten texts that were generated with AI — which it should be able to if it truly is an *AI* detector in the best sense. That, in turn, suggests that the way it’s been operationalized doesn’t yet allow it to be the bastion protecting humanity from the incursion of robots into our lives.
It’s not that GPTZero — or OpenAI’s own AI Text Reviewer, amongst a whole panoply of different AI detectors — is bad or poorly operationalized, by any means. Rather, the operationalization is supremely difficult because the task is punishingly hard: we are unlikely to have a tool that detects AI-generated text with 100% accuracy unless watermarking is implemented (MIT Technology Review), and even then we would have to combine multiple detection algorithms, or come up with alternative measures altogether.
An Arms Race between AI Large Language Models (LLMs) and AI Detectors — and why you should care (even if you’re not a student).
As I’ve mentioned, there is an arms race at hand between AI Large Language Models (LLMs) like ChatGPT and AI detectors like GPTZero, the likely consequence of which is that the two will compete with one another, each making progress in its own way and pushing both technologies forward.
Personally, I think that AI detectors are fighting a losing battle against LLMs for many reasons, but let me not put the cart before the horse — it is a battle to watch, not to predict the outcome of before it’s even begun.
Implications of this strange war:
But why should you care about any of this if you’re not a student? It’s not like you’re going to be looking at essays constantly, right?
Let’s set aside the fact that you’re reading a blog post right now, and move beyond the plagiarized-essay angle we’ve been considering, as we gravitate towards thinking about how ChatGPT is a language model.
It’s a good bet that you use language everywhere in your life, business, and relationships with other people in order to communicate, coordinate, and everything else.
When we go around on the Internet, it’s not always immediately evident what was AI generated, what was generated by a human or, for that matter, what was inspired by an AI and later followed through by a human.
The whole reason we need something like an AI detector is that we may not be able to tell, with our own eyes and minds, whether a particular piece of language (which we most often experience in the form of text) is AI-generated — to the point that we literally have to rely on statistical patterns, rather than our own judgment, to evaluate something sitting directly in front of us.
The problem is…
Language doesn’t just exist as text.
Language exists as text, yes, but also as speech. Moreover, speech and text are easily convertible to one another — and we know very well what ChatGPT is doing: Generating text.
We now know that there are Text To Speech (TTS) models that generate speech from text. They’re not all necessarily great, but that’s beside the point — they presage the translation of text into voice.
Think about it.
If the voices generated by AI become sufficiently realistic in sound and intonation (VALL-E, is that you?), how might you know that these voices aren’t real, unless there are severe model safeguards that impede the models from functioning as they are supposed to?
Now combine that indistinguishable voice with sophisticated ChatGPT output that can evade any AI detector — and that may, depending on the features that end up developing, evade your own capacity to tell whether you are even interacting with a human.
How would that play out in the metaverse?
How would that play out in the real world, over the phone?
How would you ever know whether anyone that you’re interacting with is real or not? Whether they are sentient?
The battle between AI and AI Detectors is not just a battle over the difference between an A grade and a C grade.
It’s a battle over a future where what’s at stake is identifying what even qualifies as human.