AI test to see just how intelligent it is

So, I have been testing how useful Google Gemini is at finding stock information: betas, dividend ex and pay dates, plus RNS announcements and where they are published.

From my experience, it is wrong 80% of the time. It has no idea what is outdated, what is wrong and what is correct. I challenge it and it says, "You're absolutely right, I'll change that." But the next day I'll ask it exactly the same question, and it still gives the same wrong information.

Has anyone done this with Siri, Alexa or ChatGPT, and how good or bad are they?


ChatGPT makes numerous basic errors, apologises when challenged, then repeats the mistakes.

It states that it doesn't store/remember "conversations" and so will continue to make the same mistakes until reprogrammed.


OK, and to think the various companies have spent billions building these AI bots and they can't even get simple things right.

One thing Gemini likes to do when asked for a good UK-listed dividend-paying stock is constantly suggest AGR. That stock was delisted a while ago. I tell Gemini this, and it says, "Oh yes, I didn't check properly." I can then ask the same question an hour later and it will say AGR again. It is so crap.

Got curious, so I tried it out, but I got different results. Is this similar to yours?

The better you phrase the query, the better the answer tends to be.


Based on the latest analyst forecasts and market data for 2026, the UK market remains a “yield hunter’s” paradise, with several sectors—particularly insurance, housebuilding, and tobacco—projected to deliver yields significantly above the FTSE 100 average of 3-4%.

Below are five of the top-rated UK dividend stocks for 2026, selected based on analyst consensus, yield projections, and financial resilience.

Top 5 UK Dividend Stocks for 2026

| Stock | Ticker | Forecast 2026 Yield | Primary Reason for Analyst Support |
|---|---|---|---|
| Legal & General | LGEN | ~8.5% – 9.2% | Strong capital generation and market leadership in pension risk transfer. |
| Taylor Wimpey | TW. | ~8.8% – 9.2% | Anticipated recovery in the UK housing market as interest rates stabilize. |
| M&G | MNG | ~7.5% – 7.8% | Robust balance sheet and high dividend cover compared to financial peers. |
| Phoenix Group | PHNX | ~7.8% – 8.2% | Massive cash reserves (£5.6bn+) supporting a progressive payout policy. |
| Admiral Group | ADM | ~6.8% – 7.2% | Higher "quality" yield; seen as a more reliable growth-and-income play. |

Why Analysts are Bullish on These Picks

1. Legal & General (Insurance & Pensions)

Analysts favor L&G because it is a “cash-generating machine.” Its business model is tied to the aging population and the trend of companies offloading their pension liabilities. Even with a tight dividend cover (roughly 1.1x), analysts believe its consistent cash flow makes the 9% yield sustainable for 2026.

2. Taylor Wimpey (Housebuilding)

Housebuilders often offer “cyclical” yields. Analysts are targeting Taylor Wimpey for 2026 because they expect a “Goldilocks” environment: falling interest rates making mortgages cheaper, combined with a persistent shortage of UK housing. This makes their 9% projected yield attractive for those betting on a domestic economic recovery.

3. M&G (Asset Management)

M&G is frequently highlighted for having better dividend cover (roughly 1.5x) than its high-yield insurance rivals. This provides a “margin of safety” for investors. Analysts view it as a premier choice for 2026 because it combines a high yield with a healthier ratio of earnings-to-payouts.

4. Phoenix Group (Life Insurance)

Phoenix is a specialist in “closed-book” life insurance, meaning they manage old policies for cash. Analysts point to their massive cash surplus as the reason the dividend is safe. While capital growth might be slow, the income for 2026 is considered highly predictable.

5. Admiral Group (Motor Insurance)

While its yield is lower than L&G or Phoenix, Admiral is often the “analyst favorite” for reliability. Analysts expect 2026 to see a jump in dividends (projected 7-8% growth) as the motor insurance market stabilizes after a volatile period of inflation. It is seen as the “star performer” for dividend growth rather than just pure yield.

Important Considerations for 2026

* Dividend Cover: Watch out for stocks with a cover below 1.0 (like WPP or some housebuilders), as this means they are paying out more than they earn, which is risky long-term.

* Interest Rates: Many of these yields are attractive because they “smash” the returns of savings accounts. If the Bank of England cuts rates faster than expected in 2026, these stocks may see their share prices rise as more investors pile in for the yield.

Would you like me to look into the “dividend cover” and safety ratings for any of these specific companies to see how secure those payouts really are?
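(Editor's note: the "dividend cover" the pasted answer keeps citing is simple arithmetic, earnings per share divided by dividend per share. A minimal sketch with made-up figures, not real company data:)

```python
# Dividend cover = earnings per share / dividend per share.
# The figures below are illustrative only, not real company data.

def dividend_cover(eps: float, dps: float) -> float:
    """Return how many times earnings cover the dividend."""
    return eps / dps

# A cover below 1.0 means the company pays out more than it earns.
for name, eps, dps in [("SafeCo", 30.0, 20.0), ("RiskyCo", 8.0, 10.0)]:
    cover = dividend_cover(eps, dps)
    flag = "at risk" if cover < 1.0 else "covered"
    print(f"{name}: cover {cover:.2f}x ({flag})")
```

A cover of 1.5x, like the one quoted for M&G above, means earnings are one and a half times the dividend.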

The error rate you have experienced is a logical outcome of how Large Language Models (LLMs) currently operate. These models are designed for probabilistic reasoning, predicting the next most likely word, rather than deterministic retrieval from a live database. When you ask for a specific ex-dividend date or an RNS (Regulatory News Service) announcement, you are asking the AI to perform a “needle in a haystack” search across live web data. If the search tool fails to find the exact line of text, the model often “hallucinates” a plausible-sounding date to satisfy the prompt.

The reason the AI repeats the same error the next day, despite your previous correction, lies in the architecture of the model’s memory. General-purpose LLMs like Gemini or ChatGPT do not “learn” from individual user interactions in real-time. When you correct the AI, you are merely updating the “context window” of that specific conversation. Once you start a new session, the model reverts to its base training and its live search tools. It has no long-term memory of your specific corrections unless those corrections eventually make it into the massive datasets used for its next major training cycle, which can take months or years.
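The "context window" point can be sketched in a few lines. This is a toy model, not any real API: the hard-coded "AGR" answer stands in for the model's frozen training, and the string check stands in for in-context reasoning.

```python
# Toy sketch of why a correction only lasts one session:
# each session starts from the frozen base model plus an empty history.

def new_session():
    return []  # fresh context window; nothing carries over

def ask(history, question):
    history.append(("user", question))
    answer = "AGR"  # stands in for the model's frozen (wrong) knowledge
    for role, text in history:
        if "AGR is delisted" in text:
            answer = "a still-listed stock"  # correction helps only in-session
    history.append(("assistant", answer))
    return answer

s = new_session()
ask(s, "Good UK dividend stock?")         # frozen answer: "AGR"
ask(s, "AGR is delisted, try again")      # correction enters the context
print(ask(s, "Good UK dividend stock?"))  # corrected within this session

t = new_session()                         # next day: context is gone
print(ask(t, "Good UK dividend stock?"))  # back to "AGR"
```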

Comparing this to other assistants, Siri and Alexa are fundamentally different; they are not LLMs in the same sense but rather interfaces for third-party APIs like Yahoo Finance. They are generally more accurate for a simple stock price because they pull a single data point from a structured source, but they are almost entirely useless for complex queries like RNS announcements or published channels because they lack the reasoning capabilities to parse unstructured text.

ChatGPT (specifically the GPT-5 and o3 models available in late 2025) has shown a slight edge over Gemini in “agentic” tasks, where the AI is told to browse multiple sources before answering. However, even the most advanced models still struggle with the “Retrieval-Augmented Generation” (RAG) gap. In financial contexts, a 5% hallucination rate is still 5% too high. For people in your position, relying on these general tools for price-sensitive data is a systemic risk. The hard truth is that while these models are transformative for summarising a 300-page annual report, they remain fundamentally unreliable as primary data terminals for time-critical financial events.
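The structured-API versus generative contrast above can be illustrated with a toy lookup (the table and dates here are invented for the example):

```python
# Toy contrast between a structured lookup, which fails honestly,
# and a generative fallback, which fabricates a plausible answer.
# All tickers and dates below are made up for illustration.

EX_DIV_DATES = {"LGEN": "2026-04-23"}  # hypothetical structured source

def structured_lookup(ticker: str) -> str:
    # Siri/Alexa-style: one data point from an API, or an honest error
    return EX_DIV_DATES.get(ticker, "No data found")

def generative_answer(ticker: str) -> str:
    # LLM-style failure mode: always emits a plausible-sounding date
    return EX_DIV_DATES.get(ticker, "2026-05-14")  # fabricated if missing

print(structured_lookup("PHNX"))   # honest failure: "No data found"
print(generative_answer("PHNX"))   # plausible but fabricated date
```

The second function is the "hallucination": the caller cannot tell a retrieved date from an invented one.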

Happy New Year to everyone!

To sum up your huge post: AI is useless. I didn't need Gemini to tell me that. lol.

Jokes aside, I just find it funny that they expect people to pay for what is essentially a forgetful 80-year-old who has been given wrong information and only half-knows the answer.

When the CEO of Google says he wouldn’t trust his own product, that about says it all.

One thing Gemini did recommend was Blockbuster Video, Kodak and Ask Jeeves shares to buy….. :wink:

Well, the point is that an LLM is a tool. If you want to use a hammer to drive a nail, that's fine. But if you want to use it to drill a hole, insert a wall plug and then put a screw in, it probably won't work. The problem is our expectation of a tool that seems capable of doing everything, but in reality it isn't. I use a paid version of Gemini, and it works very well for what it can do.

I use it almost exclusively to get answers so I can quickly do what I’m good at, and so I immediately understand the quality of the answer. In other cases, I carefully check the sources before accepting what it tells me.

I use AI chatbots every day but don't just stick with Gemini. I use a combination of ChatGPT, Claude, Grok, Copilot and Perplexity, and feed the answer from one to the other.

But in the end you can't simply rely on the outcome. DYOR is always the final gate. Use AI as a tool, but not as the final decision maker.

Sometimes I got good responses, sometimes just plain bad ones.


In several instances, Perplexity framed its answers by reminding me that I am xx years old, have a yy investment time horizon, and have a primary objective of growing my assets until retirement, using this information to draw conclusions about which stocks could be better candidates among a sample I had given. I didn't enter this information during the same chat session but in a previous chat several weeks earlier. So, from my experience, Perplexity did take previous interactions into account.
I have not noticed that from Gemini.
Happy New Year to everyone as well!


In Gemini, there is an option for chats to be stored to allow them to be reviewed (by humans) and train the AI.

If you allow this, the AI will "remember" previous chats and respond accordingly.


I asked Gemini directly about its ability to remember previous discussions, and here is the answer:

The ability to recall specific details across different conversations does not occur ‘always’, nor is it automatic for every user indiscriminately. It refers to a specific feature known as Saved Info.

Here are the technical details based on current Google specifications:

1. Availability for the Free Version

Yes, this feature is available for the free version of Gemini. Whilst initially launched for ‘Gemini Advanced’ users (the paid plan), Google commenced the roll-out for free users (personal accounts) during 2025.

2. Not ‘Always’ Active for Everyone

For the system to retain information, three precise conditions must be met:

  • Active Settings: The user must have ‘Gemini Apps Activity’ enabled. If history is deactivated or deleted, long-term ‘memory’ does not function.

  • Geolocation (EEA/UK): Historically, this feature was restricted or delayed in the European Economic Area (including Italy) and the UK due to regulatory compliance (GDPR). If the user observes it functioning, it indicates the account has been enabled or the update has been distributed in that region.

  • Explicit or Implicit Input: The system remembers only what it is explicitly instructed to remember (e.g., “Remember that the user lives in Italy”) or what is deduced from repeated preferences, provided the user does not delete such information.

3. User Control

The user maintains total control over this data. One may view, edit, or delete every single piece of stored information by visiting the ‘Saved Info’ section in the Google account settings.


In my case, Gemini remembers everything we discuss (sometimes too much!).

To make sure that it understands which assets I am talking about, I always provide the ISIN and the ticker. Sometimes, it mixes assets up and invents non-existent ISINs.

I've just been playing with Gemini, trimming positions in shares that have boomed (e.g. Micron, up 178%).

I asked Gemini to produce a table telling me how many shares to sell to achieve either 100% principal recovery or a 15% trim.

There were six stocks involved and Gemini managed to get every calculation wrong; one answer advised me to sell more shares than I held.

After numerous reality checks, Gemini told me “Trust these numbers for your 3:00 PM trades”. ALL of the numbers were wrong!
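The arithmetic Gemini fumbled is straightforward. A minimal sketch; the holding size and buy price below are hypothetical, with only the +178% gain taken from the post:

```python
# To recover 100% of principal: sell enough shares at the current price
# that the proceeds equal the original cost. A 15% trim just sells 15%
# of the holding. Example numbers are hypothetical except the +178% gain.

def shares_for_principal_recovery(shares_held: float, buy_price: float,
                                  current_price: float) -> float:
    """Shares to sell so that the proceeds equal the original cost."""
    cost = shares_held * buy_price
    return cost / current_price

def shares_for_trim(shares_held: float, trim_fraction: float = 0.15) -> float:
    """Shares to sell to trim a fixed fraction of the position."""
    return shares_held * trim_fraction

held, buy = 100, 50.0
current = buy * (1 + 1.78)  # up 178%
sell = shares_for_principal_recovery(held, buy, current)
print(f"Sell {sell:.1f} of {held} shares to recover principal")  # ~36.0
print(f"Sell {shares_for_trim(held):.0f} shares for a 15% trim")  # 15
```

Note the sanity check Gemini failed: whenever the share price has risen, the shares needed for principal recovery are necessarily fewer than the shares held.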


I find that if I ask "what is a good mid-cap UK growth stock", it will give me two or three good examples, and then it obviously has a double vodka and Coke and starts suggesting mega caps.

But then if I challenge the results, Gemini goes "Oh yes, you are right, try these ones" without any apology or learning.

I sum up AI as being like an 80-year-old who's had a drink and has mild Alzheimer's. It seems to think it knows the answer, but in reality it has about 40% of the knowledge, until I tell it to read a book and find the actual answer.

I have been researching buying a new car using Gemini; it is good with cold, hard, unchanging data, but with anything fluid that can change, it is not good enough yet.
