Pearson et al - education providers

Education provider stocks are massively down as a result of the ChatGPT threat. This should be a big concern for our educational systems: many kids don’t even want to go to school, and now the few who manage to do so are getting hooked on AI assistance.

If this trend continues, what sort of doctors, lawyers and scientists are we going to produce over the next 60–100 years and beyond?

We are heading for trouble and, typical of the regulators, they remain silent.


It is the education system as a whole that is the issue, not AI per se.

And yes, the days of Pearson etc. are numbered, imo.

Not necessarily. The data from ChatGPT is not independently verified. That and all AI-generated information should be validated, with an overarching human to review and sign off any work.

What it could potentially do, if told to use a specific source, say a course handbook written by Pearson or similar, is enhance the learning experience: generate unique mock tests and papers. It could be a huge value-add.

Imagine an AI that had the material, could analyse the questions you get wrong, and adjust your ‘learning material’ to improve your scores.

It requires some out-of-the-box thinking on education.
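A minimal sketch of that adaptive idea, just to make it concrete (the question bank, topics and function names here are all hypothetical, not from any real product): track which topics a student gets wrong, then weight the next mock quiz toward those weak topics.

```python
import random
from collections import Counter

def next_quiz(question_bank, wrong_counts, size=3):
    """Pick quiz questions, weighting topics the student has got wrong more often."""
    # Each question's weight is 1 plus the number of past mistakes on its topic,
    # so weak topics are sampled more frequently but strong ones still appear.
    weights = [1 + wrong_counts.get(q["topic"], 0) for q in question_bank]
    return random.choices(question_bank, weights=weights, k=size)

# Hypothetical question bank and a student's error history
bank = [
    {"topic": "algebra", "q": "Solve 2x + 3 = 7"},
    {"topic": "geometry", "q": "Area of a 3x4 rectangle?"},
    {"topic": "fractions", "q": "What is 1/2 + 1/3?"},
]
errors = Counter({"fractions": 4})  # the student keeps missing fractions

quiz = next_quiz(bank, errors)
# fractions questions are now 5x as likely to be drawn as the others
```

A real system would regenerate the questions themselves (which is where an LLM plus a licensed Pearson handbook could come in), but the weighting loop is the same shape.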

The sarcastic answer (not sarcasm aimed at you - rather aimed at the whole situation) is that it’s not a problem because we don’t need doctors, lawyers, scientists… the AI will simply do it all for us while humans do…

Welcome to mass unemployment of the middle class


Exactly, I agree! I just don’t see these classic education providers doing it.

My biggest fear/worry is this:
The speed at which change is happening does not allow our societies to organically adapt and solve the pressing issues. As AI is progressing really fast towards general AI, the time to deal with these issues is running through our fingers as we speak.
In the not so distant future, AI will progress at uncatchable speed…
I remember the first Nokia phone, the 3210, my mother purchased. I’m still amazed how incredibly fast tech has advanced since then. ‘Progress’ will get faster until we can’t steer where we are heading anymore. It’s like driving a car at 180 km/h in the city. You can’t.

There are multiple problems. The tech is progressing alarmingly quickly, but at present it gives the impression of intelligence without actually being that intelligent. What will really have the impact is the application/use of the tech. It is potentially highly disruptive to the jobs market and employment, and politicians (and society) are simply not ready or able to cope with that. Ignoring the poorest in society is standard stuff for most politicians, but lawyers, doctors, etc. getting replaced by tech is a different matter. That said, maybe we should just start with the politicians and replace them with ChatGPT (lol). There are also the sinister uses of the tech: crime, fraud, war… The reality is pretty scary.

I did see that the boss at Chegg said they had already had discussions with OpenAI about using ChatGPT in their services and products, so Chegg were identifying both a risk and an opportunity. The market has just reacted with panic, but if they rename the company CheggAI it will suddenly become a must-have stock.


I haven’t fully analysed the Chegg news, but yes, markets usually overreact, which can indicate good times to buy. Not that I’m saying buy Chegg, but it’s the kind of reaction that, if it were on my watchlist, would have me topping up if the fundamental business model was still sound.

@Dougal1984 I had never heard of the company before today (I did have PSON on a watchlist but wasn’t holding). I have only quickly skimmed through some info. Without doing lots of research, I did see comment that the earnings were positive, but selling study material to students does seem a vulnerable market, so on the one hand I can see some justification for the market panic; at this time, though, it remains a good business, just with increased risk. However, there is a fundamental question about what OpenAI is going to do with its tech. It has both not-for-profit and commercial sides to the business. If, for example, the only way students could access the tech was through companies such as Chegg integrating it into their services/products, then Chegg would be the golden boys of the stock market; but if OpenAI is going to give the world free access to the tech, then a lot of companies are vulnerable. A big problem is that most politicians have no idea. The EU has published draft legislation/regulation for AI, but this is highly disruptive tech. Chegg is an interesting first-wave example of its impact, and it’s either doom or opportunity, but that partly depends on the plans of the big software companies and OpenAI (and how they are going to use and provide access to it).

I think this is an interesting interview that puts some context on it. Yes, businesses have risks, but there are also opportunities here, and very few people actually understand what this tech is, so the market over-reacts.

Chegg CEO: ChatGPT stock plunge was ‘extraordinarily overblown’

At this time, ChatGPT isn’t really threatening anything. It’s just a novelty, a showcase.
At this point, I think Wikipedia is still better than ChatGPT as a source of information (I know Wikipedia’s limitations, e.g. being easily editable).

  • Where are the references ChatGPT is using?
  • Can we check its sources of information?
  • It is lacking information and is a bit US/Anglo-Saxon biased.
  • Who controls/supervises its functions? And is there any regulation it must follow?
  • It has a lot of wrong or incomplete information. Because of that, some people can mistake the information as all true and correct, just because some of it is correct. It’s like an erudite liar’s speech that mixes some truths with some made-up data, or a novel based on historic/real facts (e.g. The Da Vinci Code).
  • The ChatGPT answers are sometimes as dumb as a drunk person’s speech. :slight_smile: Since its answers are linked to the previous ones in that session, even if you change the questions, it assumes the questions are somehow related.

Maybe for most people it can produce better information (with all the errors, incompleteness and bias attached) than they could themselves, as most people are lazy and/or illiterate. And I suspect that kind of tech will encourage even more laziness and illiteracy in most people. Examples: autocorrect on phones and computers, and translators; even when we know the correct words or how to translate something, using them is faster and takes no brain effort.

Just see, for example: it took tens of thousands of years for humans to be able to fly (even counting hot-air balloons). Humans created the first flying airplane about 100 years ago, but now humans are building space vehicles and planning trips to Mars (after sending space probes to Mars and other places).

See Moore’s Law: :wink:
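To make Moore’s Law concrete, here is a back-of-the-envelope projection assuming the classic formulation (transistor counts doubling roughly every two years; the starting figure of ~2,300 transistors for the 1971 Intel 4004 is a well-known historical number):

```python
def transistors(start_count, years, doubling_period=2):
    """Project a transistor count under Moore's Law:
    the count doubles every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Starting from ~2,300 transistors (Intel 4004, 1971), project 50 years ahead:
projected = transistors(2_300, 50)
# ≈ 7.7e10, i.e. tens of billions — roughly the order of today's largest chips
```

The exponential term is the whole story: 25 doublings multiply the starting count by about 33 million, which is why the curve feels "uncatchable" after a few decades.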

Thanks for the information!