AI Literacy: What AI Can and Cannot Do
Why AI literacy matters
In 2026, AI is everywhere — in emails, search engines, presentations, analytics work. But most people use AI either with unrealistic expectations ('AI can do everything') or excessive fear ('AI will replace me'). Both extremes lead to bad decisions. AI literacy is the ability to understand what AI can do, where it fails, and how to use it effectively.
What AI actually is
AI models like ChatGPT, Claude, or Gemini are text prediction systems. Based on your input, they predict what text should follow. They are trained on enormous amounts of text from the internet, books, and documents. This determines both their strengths and weaknesses.
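The "predict what text should follow" idea can be illustrated with a toy bigram model. This is a drastic simplification (real LLMs use neural networks over subword tokens, and the tiny corpus here is invented for the example), but the core task is the same: given what came before, pick the most likely next word based on patterns in training text.

```python
from collections import Counter, defaultdict

# Toy training corpus -- a stand-in for the internet-scale text
# that real models are trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: the model's "patterns".
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict(word):
    # Return the most frequent follower seen in the training data.
    return next_words[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' -- the most common word after 'the' in the corpus
```

Note what this model does *not* do: it has no idea what a cat is. It only knows that "cat" frequently followed "the" in its training data, which is exactly the distinction the myths section below turns on.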
What AI does well
- Summarization: condensing long text into key points
- Text generation: emails, reports, proposals, documentation
- Analysis: recognizing patterns in data and text
- Translation and paraphrasing: reformulating text for different audiences
- Brainstorming: generating ideas and alternatives
- Structuring: organizing unstructured information into logical frameworks
- Coding: generating, reviewing, and debugging code
Where AI fails
- Factual accuracy: AI can confidently state something incorrect (hallucination)
- Math and precise calculations: surprisingly unreliable on complex calculations
- Recency: training data has a cutoff date, so the model does not know about the latest events
- Personal context: does not know what you want unless you tell it
- Ethical decision-making: AI has no moral compass, only statistical patterns
- Original creation: creates combinations of existing material, not genuinely new ideas
The most important rule of AI literacy: AI is not an expert, it is a very capable assistant. It does not advise from experience — it predicts what an experienced expert would likely say. That is a huge difference.
How to read AI outputs critically
AI outputs look convincing — fluent language, structured arguments, confident tone. That does not mean they are correct. Three rules for critical reading:
- Verify facts: any specific numbers, dates, quotes, or claims should be verified in primary sources
- Look for logic gaps: AI can write a convincing argument with flawed logic
- Question certainty: when AI says 'it is well known that...' or 'studies show...', ask — what studies? how does it know?
AI myths that must die
Myth 1: 'AI understands what I am saying.' It does not. It predicts the statistically most likely response based on patterns in training data.
Myth 2: 'AI is objective.' It is not. AI reflects biases in training data. If the training data favors a certain viewpoint, AI will reproduce it.
Myth 3: 'AI will replace me.' Probably not. AI replaces specific tasks, not entire roles. People who learn to use AI effectively will become more valuable, not less.
The best way to improve your AI literacy is to experiment. Use AI for different tasks, deliberately test its limits, and note where it surprises you — both positively and negatively.
When you are unsure whether an AI response is accurate, try asking the same question to a different AI model (e.g., Claude vs. ChatGPT vs. Gemini). If they disagree, that is a strong signal that you need to verify the answer independently.
Give AI the following tasks and observe where it succeeds and where it fails:
1. Ask about an event from the last 24 hours — can it answer?
2. Request a complex math calculation (e.g., 17^3 + 289 / 17)
3. Ask for a restaurant recommendation in your city — how accurate is it?
4. Ask for a summary of a long article — how good is the summary?
5. Ask for 'facts' about a fictional company — will AI hallucinate?
For each task, record: (a) how convincing the answer looked, (b) whether it was correct, (c) what you learned about AI's limits.
Hint
The fictional company is the most interesting test — AI will likely generate convincing but completely fabricated information. This is the core of the hallucination problem.
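The math task above is easy to check without trusting the model at all: under standard operator precedence, exponentiation and division bind before addition. A quick sketch of verifying it deterministically in Python:

```python
# Verify the arithmetic from task 2 yourself, instead of trusting
# a model's "mental math". Precedence: 17^3 + (289 / 17), not (17^3 + 289) / 17.
result = 17**3 + 289 / 17
print(result)  # 4930.0  (4913 + 17)
```

This is a useful general habit: whenever an AI answer can be checked by a short, deterministic computation, run the computation rather than eyeballing the model's output.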
Pick a topic you know well (your profession, hobby, or area of study). Ask the same 3 questions to at least 2 different AI models (e.g., ChatGPT, Claude, Gemini).
1. Write down the 3 questions before you start
2. Ask each model separately — do not share the other model's response
3. Compare: where do the answers agree? Where do they differ?
4. As a domain expert, which model was more accurate? More nuanced?
5. Record 2-3 insights about how different models handle the same topic
Hint
Differences between models often reveal areas where training data varies or where the topic has genuine ambiguity. Agreement between models does not guarantee correctness, but disagreement is a strong signal for further investigation.
Ask AI to write a short biography (5-8 sentences) of a real but relatively obscure person in your field — someone who is not world-famous but has made contributions.
1. Read the biography carefully — does everything look correct?
2. Verify each factual claim: dates, positions, achievements, publications
3. Count how many facts are correct vs. fabricated vs. partially correct
4. Ask AI to provide sources for its claims — does it cite real sources?
5. Write a corrected version of the biography using verified information
Hint
AI tends to be more accurate about very famous people and less accurate about less well-known figures. The less famous the person, the more likely AI will fill gaps with plausible-sounding but fabricated details.
- AI predicts text based on patterns — it does not understand, has no experience, has no moral compass
- Strengths: summarization, text generation, analysis, brainstorming, coding
- Weaknesses: factual accuracy, math, recency, originality
- Always verify facts, look for logic gaps, question certainty
- AI is an assistant, not an expert — use it accordingly