
Artificial Intelligence (LLM)

Just as you would in a conversation with a stranger, be mindful of what you say when chatting with an AI tool.

Consider: what information is the tool asking me for? What information am I willing to share?

AI tools thrive and grow on data, but you don't have to give yours away for free!

  • Be careful and selective about what you choose to share.
  • When given the option to 'Skip' giving personal details, choose 'Skip'!
  • Think twice about using a tool that requires you to log in or link your personal email account. Linking your email gives the AI company all sorts of personal information about who you are and how you use your device. Look for and use tools that don't require it.

When evaluating information from a chatbot, use your critical thinking skills. Did you know that AI tools 'hallucinate' and "generate false or nonsensical information"? AI tools "can be very confident liars."

"A New York Times report last year found that the rate of hallucinations for AI systems was about 5% for Meta, up to 8% for Anthropic, 3% for OpenAI, and up to 27% for Google PaLM."


Why do AI tools hallucinate?

"Chatbots “hallucinate” when they don’t have the necessary training data to answer a question, but still generate a response that looks like a fact. Hallucinations can be caused by different factors such as inaccurate or biased training data and overfitting, which is when an algorithm can’t make predictions or conclusions from other data than what it was trained on."

Source: Bratton, Laura. "Salesforce says its AI chatbot won't hallucinate – well, probably." Quartz, April 25, 2024. https://qz.com/salesforce-einstein-copilot-hallucinations-1851436568. Accessed October 3, 2024.
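The quoted definition mentions overfitting as one cause of hallucinations. As a rough illustration only (a hypothetical Python sketch, not taken from the article and far simpler than how real chatbots are built), here is a tiny model that fits its training data perfectly yet gives a confidently wrong answer outside it:

```python
import numpy as np

# Six training points that lie almost on the straight line y = x.
x_train = np.linspace(0, 1, 6)
y_train = x_train + np.array([0.0, 0.05, -0.05, 0.05, -0.05, 0.0])  # tiny "noise"

# A degree-5 polynomial has enough freedom to pass through all six points,
# so it memorizes the training data, noise and all.
coeffs = np.polyfit(x_train, y_train, deg=5)
train_error = np.max(np.abs(np.polyval(coeffs, x_train) - y_train))

# Inside the training range, the fit looks flawless...
print(f"max training error: {train_error:.2e}")

# ...but extrapolating to x = 2, where the true value is about 2.0,
# the overfit model is badly wrong -- and gives no hint that it is.
prediction_at_2 = np.polyval(coeffs, 2.0)
print(f"prediction at x=2: {prediction_at_2:.1f} (true value ~2.0)")
```

Like a hallucinating chatbot, the model produces a fluent, confident-looking answer for a question it was never trained on, with no built-in signal that the answer is wrong.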