Okay, here's a start dealing first with the apparent dangers, and broaching the notion of intelligence. I'm too knackered to edit it, so it's more like rough notes.
Current AI capabilities are a fraction of a fraction of human intelligence, something I'll try to get ChatGPT itself to explain (I was just asking it questions and its answers were near enough, if understated). But we're not used to machines simulating natural language, so it looks impressive, as do robots with realistic movement. That doesn't mean it's intelligent in the way humans are intelligent. It's still awesome, though, so count me as both a fan and a sceptic.
I think the best place to start is by comparing AI to dangerous machines, to get that out of the way.
We have long had dangerous machines, from guns to warheads and viruses, including sufficient nuclear arms to eliminate ourselves. The nuclear risk is already immeasurably greater than anything AI can bring to the table now and, as I will argue, in the future.
That's not because AI can't be used to create something dangerous, such as an army of lunatic droids. But doing that is much harder than keeping nuclear weapons out of the hands of madmen. And the trajectory of development means there is ample time to constrain the dangerous use of AI as new threats arise: identifying risk areas, then limiting and controlling them.
And, as with nuclear hardware, the footprint is larger than just an algorithm because that algorithm has to interface with extraordinarily expensive and specialised hardware to be dangerous. Again, that buys time and leaves a very traceable and trackable footprint.
So, I think the best way to put that question aside is to recognise that nuclear weapons are already far, far more dangerous, and we control them successfully (albeit not well enough in my view).
Arguably, though, cybersecurity risk is probably a better example of a dangerous machine (or programme, specifically). We struggle to contain it, but we do counter it by constantly staying ahead of it, sharing data and crowdsourcing solutions and testing. And we could definitely get much tougher on it if we wanted to; we just don't yet see the cost as high enough to get draconian. At some point, no doubt when insurance becomes impractical, I do think we will get tougher on it.
I don't see AI risk much differently, because the very same process can be used to stay ahead of AI algorithms, and once again we can get more draconian if we want.
Some of the fear, though, comes from a misunderstanding of what AI is. It isn't intelligence, at least in the sense of human intelligence. It is approximating and simulating expected answers through high-powered statistical search and structuring algorithms that learn from patterns stored in memory. And because AI isn't like human intelligence (to be explained more below), it isn't like human learning. Its learning is really just pattern matching and statistical expectation, which 'learns' by matching and guessing progressively better. That's nothing like the human learning engine, even though we also do statistics (and hence encode them in the algorithm itself).
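To make that 'statistical expectation' point concrete, here's a toy sketch in Python (a deliberately crude bigram model I've invented for illustration; real systems like ChatGPT use enormous neural networks, but the basic move of predicting the most expected continuation from stored patterns is the same kind of thing):

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram 'model' that 'learns' by counting which
# word tends to follow which in some training text, then 'predicts' by
# returning the most statistically expected continuation. No understanding
# involved, just pattern matching and counting.

training_text = "the cat sat on the mat the cat ate the fish".split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    follow_counts[current_word][next_word] += 1   # store the pattern

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    if word not in follow_counts:
        return None
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("the"))   # -> 'cat' (seen twice, vs 'mat' and 'fish' once each)
print(predict_next("cat"))   # -> 'sat' ('sat' and 'ate' tie; first-seen wins)
```

Scale that up by billions of parameters and you get fluent text, but the 'learning' is still progressively better guessing, not human understanding.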
A lot of the fear also comes from AI hype. The greater the fear, the greater the intelligence, right? So, of course the creators of large language models like ChatGPT ham that up miles beyond reality, because it makes the algorithm seem even more impressive. They're loving all this fear and concern because it's validating.
So for me, no fear whatsoever. Humans adapt pretty quickly and effectively to shifts like this and always have.
Of course, no fear doesn't mean no problems. E.g., learners can cheat, and AI embedded into products to automate them can be biased or learn to be biased. So sure, you need to control those things, but the current proposals I've seen look reasonable (e.g., making ChatGPT identify its sources; Bard does this somewhat, but not sufficiently, and it wouldn't be very hard). Bias is easy enough to counter through transparency, contextual warnings and other tactics that have already been developed and discussed extensively.
And it's not just talk; legislation is already coming thick and fast. The UK has bills in process, and this one on protecting work and employment from biased AI is a good example:
https://bills.parliament.uk/bills/3464
To my mind, the education problem isn't hard, although school education is not my expertise, so, Cam, I will defer to you here!
But I'd make kids do more work in class, and create sterile PCs and work processes that demonstrate ChatGPT wasn't used, etc. That's easy at school, and it can be done at home too. Creepy companies already do this with remote workers, so it can't be hard to find a sensible way to monitor kids, and it might even be a good thing given they already waste too much time on the web and their phones.
Plagiarism and copying were rife long before ChatGPT, and it's pretty easy to recognise the difference between someone's own writing and ChatGPT's, even if you manipulate it to write in a certain style. If a teacher knows the student and has time to actually read the text properly, it wouldn't be hard. The problem here is less to do with ChatGPT and more to do with existing problems in education, such as ever-shrinking intensive learning and engagement.
Sure, it's a challenge and a pain, but not really a difficult problem to solve in and of itself. You could even get kids to write more at school and map that work so you can algorithmically recognise their writing, as in the sketch below. The main issue there is privacy, so that data would have to be treated like medical data. Then again, writing is already being harvested from public data on the web, and because all kinds of medical conditions affect writing, language mapping is already being done to help kids. There's a strong argument, in fact, that doing that would lead to auto-instructional language learning, much like automated typing lessons.
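For what it's worth, here's a toy sketch of what that 'mapping' could look like: build a character-trigram fingerprint of a student's known writing and compare new submissions against it. The sample sentences and the single similarity score are purely my illustration; real stylometry tools use far richer features (vocabulary, syntax, sentence length and so on) and proper baselines.

```python
import math
from collections import Counter

# Toy stylometry sketch: a character-trigram 'fingerprint' of known writing,
# compared to a new submission with cosine similarity. Purely illustrative;
# real authorship analysis uses many more features and validated thresholds.

def fingerprint(text):
    text = " ".join(text.lower().split())
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def similarity(fp_a, fp_b):
    shared = set(fp_a) & set(fp_b)
    dot = sum(fp_a[g] * fp_b[g] for g in shared)
    norm_a = math.sqrt(sum(v * v for v in fp_a.values()))
    norm_b = math.sqrt(sum(v * v for v in fp_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented example sentences, just to show the mechanics.
known_work = "I reckon the experiment failed because we forgot to seal the flask properly."
submitted = "I reckon the test failed because we did not seal the flask properly."

score = similarity(fingerprint(known_work), fingerprint(submitted))
print(f"style similarity: {score:.2f}")  # closer to 1.0 means more alike
```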
Again, to me they're very minor challenges. Education has far, far bigger problems already, being absurdly outdated, grossly under-funded, barely fit for purpose and interfered with by every second Karen parent and religious crazy. I won't go into that, but much like nuclear weapons are already infinitely more dangerous than AI, education has infinitely greater challenges than ChatGPT.
That's a quick take on those aspects.
You will sometimes see the term 'general intelligence' or artificial general intelligence (AGI). That's real intelligence, and AI is not even close to achieving it. But because people can't define intelligence or explain what human intelligence entails (which is extremely complex), they mistakenly conflate current AI pattern matching and simulation with actual intelligence.
To me, the difference between the two is like the difference between pressing the cos button on a scientific calculator and understanding trigonometry. Any idiot can press the cos button. All the intelligence lies in the reason for pressing it, which has its own extensive system of reasoning, deductions, planning and motivations. Then you have to understand what the output means, store it in some useful and contextualised form for later, be able to modify or transform it as needed, apply it to the specific task you've already grasped and have in mind, assess its effectiveness or suitability, share the new outputs with others in a comprehensible form, etc.
So, unless you already understand trigonometry and bring that whole ensemble of knowledge, plans and logics to the task, pressing that button and viewing its output won't mean a damned thing. AI is pretty much like that; it can generate outputs, but unless you already know a hell of a lot about the output, and a hell of a lot about similar things and related sub-sets and layers of knowledge, including any humans, organisations or systems you engage in the process, you can't do much more with it than cheat on a test or save time writing something.
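To put some invented numbers on the calculator analogy: the 'button press' below is trivial, and everything around it, i.e., knowing that a right triangle models the situation, which function to press and what the answer means, is the part that takes actual understanding.

```python
import math

# The calculator analogy made concrete (numbers invented for illustration).
# math.cos() is the 'button press'. Knowing why to press it -- that a right
# triangle models the ladder, and which side cos relates to which -- and
# what to do with the answer is the part that requires trigonometry.

ladder_length_m = 5.0          # hypotenuse: the ladder itself
angle_from_ground_deg = 70.0   # angle between ladder and ground

# The 'button press':
cos_of_angle = math.cos(math.radians(angle_from_ground_deg))

# The understanding: adjacent side = hypotenuse * cos(angle), i.e. how far
# the foot of the ladder sits from the wall.
foot_distance_m = ladder_length_m * cos_of_angle
print(f"foot of ladder is {foot_distance_m:.2f} m from the wall")  # ~1.71 m
```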
Imagine you use ChatGPT to write a product brochure. You already have to know everything about the product to know whether what ChatGPT said about it is correct, or whether it grasps the key ideas and implications. Using ChatGPT doesn't remove the need for that knowledge at all.
As mentioned, I will use ChatGPT itself to explain some of these issues in subsequent posts. It simply lacks multi-layered contextual awareness, knowledge matrices and purposeful human creative transformation capabilities. In the case of the product brochure, the text it creates might not be close to the way I want to represent the product to emphasise certain things, impress certain people, be catchy and memorable, connect the product to other trends, position the product in the marketplace, subtly compare it to rival product features and the language competitors use, allay fears by speaking to a sub-audience, assure regulatory bodies, avoid terms that trigger the wrong associations, hedge against misunderstanding, meet stakeholder and audience expectations, avoid legal jeopardy, discourage misuse or dangerous use, and on and on. And if ChatGPT can do some of that, great! That will save time and money. But you won't know whether its output is even vaguely useful unless you already have an advanced knowledge of the product, tasks, fields, people, culture, organisations, legal system, etc. in which the product is embedded. And imagine the product is something complex, like an ERP solution, mobile phone or aeroplane. If all that knowledge and awareness is needed to write a brochure for something as simple as a bathroom tap or a sashimi knife, how much more is needed for highly complex products?
And I think that distinction, i.e., that it takes one to know one (it takes an expert to recognise expert work), is key here. It also probably offers many of the answers for education. So, when I ask AI something about human intelligence, which I might have wrestled with before (and repeatedly over decades in the case of 'meaning', 'understanding', 'knowing', 'sense-making', 'purpose' and 'learning', etc.), I can clearly see its limits. But I'll take anything that saves time typing, especially given my health conditions, which can make typing painful. Just knowing the right question to ask is 90% of the game.
People just haven't learned the scope of human intelligence well enough to grasp this topic intuitively or even approach it learnedly (meaning, with a thoughtful, logical plan and approach in mind based on careful testing, study and experience), let alone explain it (which is hard even for philosophers who've studied it for decades). Cognitive biology, cognitive science, psychology, philosophy, sociology, culture, geography, relationships and on and on are super complicated and exhausting things to describe and explain, let alone understand. Yet the brain handles the practice of all of those things near-instantaneously, with a very high degree of mastery, without breaking a sweat.