Reasons not to trust responses from one AI model (ChatGPT)

Here are five instances where the responses of ChatGPT (OpenAI) could not be trusted.

The first instance was a question about the 1967 Progressive Conservative leadership convention, which ended John Diefenbaker’s leadership of the party.  I asked ChatGPT for the first-ballot results, as I recalled that Diefenbaker was humiliated by the paucity of his support.  However, the AI responded by reporting the results for the top five contenders, not mentioning Diefenbaker at all.  When I mentioned Diefenbaker, the AI ‘apologized’ for the response and then reported Diefenbaker’s sad result (2%).  In my view, the basic factual error was compounded by the AI’s lack of data on the context of the convention.  The whole purpose of the convention was to replace Diefenbaker as leader, so leaving his name out of the first-ballot results was an egregious error.

The second instance was the response to my questions about scandals during the Mulroney Government serious enough to result in the resignation of a Cabinet Minister.  The AI identified six resignations.  I thought there were more, and so I asked about a specific name I remembered (Michel Coté).  At first the AI said there was no record of that person having been a Minister in the Mulroney Government.  When I sent a link to the news story, the AI ‘apologized for the confusion’ and reported on the story.  I then asked about another Minister (John Fraser and Tuna Gate).  The AI denied that that Minister had had to resign.  When I mentioned Tuna Gate specifically, the AI referred to an entirely different scandal having nothing to do with tuna.  When I challenged it again, the AI ‘apologized for the previous incorrect information’ and reported the story as I remembered it.

A third instance concerned whether a constitutional amendment would be required to change the Canadian electoral system from first-past-the-post to proportional representation.  The AI claimed that ‘first past the post’ was defined in the Constitution Act, 1867 (the BNA Act).  When I asked for the specific citation, the AI conceded that the Act does not specify any particular electoral system – a direct contradiction of its earlier claim.  Yet the AI continued to assert that a constitutional amendment would be required (due to historical precedent from the UK Parliament).  However, this answer has been challenged in the courts, and there is no settled law on the question.

A fourth instance concerned the origin of a phrase written by Robertson Davies: “When the orbs are gone, the sceptre is unavailing”.  The AI reported that the phrase appeared in the novel “Fifth Business”, and gave a long citation that did not in fact include the phrase.  When I pointed this out, the AI ‘apologized for the confusion’ and attributed the phrase to the Davies novel “The Manticore”.  This was a ‘close but no cigar’ response, as the two novels are part of the same trilogy.

The fifth instance was the response to a question about the tennis rules concerning where a ball may be hit; I was specifically asking whether an opponent may hit the ball before it crosses the net.  In the course of its answer, the AI made two errors: (1) it stated that the ball must land within the opponent’s court before it can be hit; (2) it stated that the racket must make contact with the ball below the player’s waist.  When I challenged the AI’s response, once again the AI ‘apologized for the oversight’ and corrected its answer.

My conclusion is that the AI has a large vocabulary of apologies but offers no guarantee of accuracy.
