From the video transcript:
"AI tools, especially generative AI such as ChatGPT, don't have clear boundaries between creative and factual responses. Since AI is designed to provide what you ask for, it may give you made-up information, known as hallucinations....
There are several things you can do to limit the impact of AI bias and misinformation.
- First and foremost is critical thinking. If you approach AI responses with a critical eye, you're less likely to be fooled. Ask yourself: How might existing biases have influenced this answer? What kind of information might be overemphasized or completely missing?
- AI-generated summaries may seem like they save you time, but they can get facts wrong or miss important details. Instead of relying on an article summary created by AI, read the abstract and skim the article to see how well they match.
- If your AI response comes with citations, check them to make sure they actually say what the AI claims they do.
- Compare information from multiple sources, including articles from library databases and the websites of respected organizations. Many search engines now automatically steer you to AI versions of their search results. Switch to the non-AI version of search, such as selecting the Web search tab instead of the AI default.
- When you want AI to provide you with accurate facts, keep your prompt simple and direct. Avoid providing a scenario, since that may encourage the AI to hallucinate misinformation to boost your argument...
- In your prompt, include the types of sources you want it to use. It's also a good idea to ask it to provide citations and links. Then be sure to check those citations."