The way people access information and solve problems is changing. Large language models now allow us to ask AI questions and receive answers at any time.
Currently, these large language models operate on a question-and-answer basis—you must first pose a question before receiving an answer. Therefore, to truly unlock the value of AI, simply obtaining an answer is not enough—learning how to ask questions is the key.
Kevin Kelly once outlined 12 trends for future development in a talk, with the 11th being the importance of questioning. In an era when information is readily accessible, “good questions are more meaningful and valuable than perfect answers.”
Why Do We Need to Ask Questions?
We live in a noisy and chaotic world overflowing with information, diverse opinions, and blurred lines between truth and falsehood.
In such an environment, passively absorbing information is like a sponge that soaks up water indiscriminately—it gathers a vast amount of “water,” but cannot tell which is clean and which is contaminated, leading to many risks.
Large language models are designed primarily to produce fluent text, which means the information we receive isn’t always rigorously filtered or verified.
Online, many content providers—be they advertisers, politicians, or even some experts and media outlets—may have specific agendas. Their information might be biased, incomplete, or even deliberately misleading or false.
Moreover, given the extremely low threshold for publishing on the Internet, anyone can appear to be an “authority,” making it even more necessary and challenging to discern credible sources.
Our brains tend to engage in “fast thinking” (or System 1), favoring shortcuts and relying on available information and intuition. Although this saves time and effort, it often leads to mistakes. Additionally, our minds are prone to confirmation bias and belief perseverance—we tend to seek out and accept only the information that reinforces our existing beliefs, ignoring opposing views. Passive reception of information can trap us in our own echo chambers.
Active questioning, especially critical questioning, is a kind of “gold mining” mindset. It requires us to engage actively with information, filtering it and evaluating the quality of arguments (by examining the thesis, conclusions, reasons, evidence, assumptions, fallacies, and any omissions) in order to decide what is trustworthy.
Questioning forces us to engage in “slow thinking” (or System 2), prompting more in-depth and comprehensive analysis and evaluation, which ultimately supports wiser judgments and decisions.
On a higher level, relying solely on passive reception makes us vulnerable to becoming “intellectual slaves” to others’ ideas. Active questioning is the foundation for taking control of our own beliefs and conclusions, enabling us to think independently and more effectively counter those who try to persuade us.
Good questions stimulate thought and imagination far more than answers. By asking questions, we not only achieve a deeper understanding of the available information but also identify problems, challenge assumptions, and explore new possibilities—the birthplace of knowledge production and innovation.
Chizuko Ueno emphasizes that scholarship is not about rote memorization but about "learning and then questioning"—posing one’s own questions and seeking the answers.
Good Questions Are More Valuable Than Answers
Warren Berger, in his book A More Beautiful Question, clearly states that “questions drive thinking, while answers kill imagination” and that “good questions are more valuable than answers.”
A well-crafted, high-quality question sparks our curiosity and compels our brains to enter a mode of exploration, analysis, association, and evaluation (i.e., slow thinking or System 2). It opens the door to cognition, prompting us to probe causes, possibilities, and solutions.
In contrast, an answer—especially a readily accepted one—often signals the end or a pause of the thinking process. Once we obtain an answer that seems “correct” or “good enough,” we tend to stop further exploration and imagination.
Questions open up possibilities; answers close them.
In an age of information overload, finding an “answer” is relatively easy (for example, via search engines), but this often results in passive, sponge-like absorption rather than true understanding and internalization.
In contrast, posing and attempting to answer a question compels us to actively gather information, analyze evidence, evaluate arguments, and scrutinize assumptions—a process that is itself a form of deep learning and knowledge internalization.
As Chizuko Ueno highlights, the essence of doing research is “learning and then questioning”—becoming an “information producer” rather than merely a consumer.
In a rapidly changing world, yesterday’s “correct answer” might no longer apply today; clinging to outdated answers can lead to falling behind or even failure. The ability to continuously ask questions, especially predictive and exploratory ones, helps us remain in a state of learning, adapt to changes, foresee future trends, and continually adjust our cognitive and action strategies.
As noted in A More Beautiful Question, in today’s era where answers are at our fingertips, the ability to pose a good question has become a rare and highly valuable skill. It distinguishes passive information recipients from proactive thinkers, creators, and leaders.
The Mindset of the Questioner
In their book Asking the Right Questions, Browne and Keeley discuss two types of thinking: sponge-like thinking and gold-mining thinking. Sponge-like thinking passively absorbs as much information as possible, like a sponge soaking up water, while gold-mining thinking actively interacts with and filters information, much like a prospector sifting through sand for nuggets.
The advantage of sponge-like thinking is that it quickly accumulates a vast amount of knowledge, laying the groundwork for complex thought. The drawback, however, is its lack of a filtering mechanism—making it difficult to judge the value and reliability of information, leading to wholesale acceptance of everything encountered.
Gold-mining thinking requires the reader or listener to continually ask questions, challenging the source, reasoning, evidence, and assumptions behind the information. The goal is to “mine” valuable nuggets (i.e., reliable conclusions and robust arguments) and discard the irrelevant details. This approach emphasizes interaction and evaluation throughout the process.
These two modes of thinking can complement each other—you need to absorb information (like a sponge) before you can filter and evaluate it (like gold mining).
Gold-mining thinking demands constant questioning and is inseparable from critical thinking. Asking the Right Questions distinguishes between strong and weak forms of critical thinking.
Weak critical thinking involves using critical thinking skills exclusively to defend one’s current beliefs. Its aim is not to pursue truth or virtue, but to resist or refute views that differ from one’s own, forcing the other party to concede.
This is a closed, defensive mindset.
Strong critical thinking, on the other hand, applies critical thinking skills to evaluate all claims and beliefs—especially one’s own. It requires us to scrutinize our viewpoints by the same standards and to welcome criticism of our beliefs, all in the pursuit of truth and better decision-making.
This open and reflective mindset is what Browne and Keeley advocate.
To be an effective critical thinker, one must develop three dimensions: awareness, ability, and willingness.
- Awareness: Recognizing that there is an interconnected set of critical questions (as raised throughout this book).
- Ability: Being capable of posing and answering these critical questions appropriately when needed.
- Willingness: Having the proactive drive to apply these critical questions actively in evaluating information and arguments.
The underlying drive behind constant questioning is, undoubtedly, curiosity. In A More Beautiful Question, Berger suggests we should “ask questions like a 4-year-old.”
A four-year-old is a natural questioner, asking hundreds of questions every day. They do so purely out of curiosity, wanting to understand how the world works. They are unafraid of exposing their ignorance and boldly ask even the most basic or seemingly “silly” questions. Their minds remain open and free of bias.
As we grow older, however, the habit of asking questions often diminishes due to fear of making mistakes, the worry of appearing ignorant, or social pressures (like avoiding questions to save face). Cultivating a curiosity-driven mindset means making the effort to reclaim and maintain a childlike wonder and the courage to ask questions, overcoming these learned obstacles.
Berger also contrasts the mindset of a scout with that of a soldier.
- The soldier mindset is about defending one’s position (i.e., existing beliefs). When encountering challenges or differing information, it instinctively defends or refutes, caring more about being “right.”
- The scout mindset is about venturing out, mapping the territory, and understanding reality as accurately as possible. When faced with new information or differing opinions, a scout remains curious and willing to adjust their “map,” caring more about truly understanding the truth. Scouts exhibit intellectual humility, accepting that they might be wrong and readily adjusting their views based on new evidence. Seeking understanding rather than clinging to preconceptions is the essence of this mindset.
A Shift in Learning Strategies in the AI Era
The introduction of artificial intelligence is transforming how we handle information and approach learning. In the era before AI assistants, many people adopted a linear, “sponge-like” method of learning: reading textbooks or materials from beginning to end, absorbing every bit of information along the way. While this mode of thinking allows for the accumulation of large amounts of information, it tends to be passive, undirected, and often leads to simply accepting whatever is read.
In contrast, critical thinkers advocate for a “gold-mining” approach to learning—sifting through the flow of information to extract valuable insights. This means actively engaging with information by continuously asking questions to filter out the most important insights. Gold-mining thinking requires us to interrogate the material as we read, using questions to pinpoint the best decisions or most reasonable beliefs.
With AI, we can fully leverage this proactive learning strategy. In the past, when learning a new field, one might pore over thick textbooks. Now, we can have AI provide an overview, follow up with questions to clarify unfamiliar concepts, quickly build a comprehensive framework, and then delve deeper into the areas we need.
This approach combines the sponge-like acquisition of background knowledge with the active, gold-mining process of targeted questioning—first rapidly “absorbing” fundamental information through AI, then focusing on breakthroughs through iterative questioning.
For example, when confronting an unfamiliar subject, you might start by asking, “What are the core concepts and branches in this field?” Once you have a panoramic overview, you can continue with in-depth questions such as, “What exactly does this concept mean and how is it applied?” or “What unresolved issues exist in this field?”
This back-and-forth questioning leads to a more efficient and focused understanding than simply reading from cover to cover.
Take decision-making as an example. In the past, we might have relied on personal experience or intuition. Today, we can use AI to list decision criteria and evaluate the pros and cons of various options, enabling more rational choices.
In writing, the traditional method often involved solitary brainstorming and a lengthy process from concept to draft. Now, through dialogue with AI, we can quickly generate outlines, spark ideas, and even have AI serve as a writing coach to refine our work.
The unprecedented interactivity and speed of knowledge acquisition provided by AI shifts us from passive reception to active questioning, ultimately helping us learn and solve problems faster and more effectively.
Prompt Engineering: How to Craft High-Quality Questions
Interacting with AI essentially involves supplying prompts and receiving responses. Prompt engineering refers to the design and optimization of these questions or commands so that the model better understands your intent and produces satisfactory results.
High-quality prompts don’t come by chance; they require strategy and technique. Below are several key points for constructing effective AI prompts, along with some classic template examples.
- Clearly Define Roles and Context
Although AI is powerful, it does not inherently “understand” unstated assumptions. Providing relevant background and role specification helps ensure more accurate responses.
For instance, you might start with: “Assume you are a nutritionist with 10 years of experience…” or “Please explain this from the perspective of a history teacher…” By defining the AI’s identity or tone, you orient it within a particular professional or stylistic framework, resulting in content that better meets your needs. Alternatively, you can briefly set the context before asking a question, such as “I’m writing a popular science article on climate change for high school students,” followed by your specific question.
The clearer the context, the more precise the answer.
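The role-plus-context pattern above can be sketched as a small Python helper. The `build_prompt` name and field layout are this sketch's own choices, not any library's API; the point is simply that role, background, and question travel together in one prompt:

```python
def build_prompt(role: str, context: str, question: str) -> str:
    """Assemble a role, background context, and question into one prompt."""
    return (
        f"Assume you are {role}.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

# Illustrative example: a budgeting question posed to a nutritionist persona.
prompt = build_prompt(
    role="a nutritionist with 10 years of experience",
    context="I am planning weekly meals for a family of four on a tight budget",
    question="Which inexpensive foods give the best protein per dollar?",
)
```

Keeping the three pieces as separate parameters makes it easy to reuse the same role and context across many questions.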
- Be Clear and Specific; Avoid Vagueness
A good prompt should directly convey your needs and avoid ambiguous wording.
Avoid vague requests like “Help me write something.” Instead, specify your expectations—for example: “Please write a 300-word introduction explaining the three main applications of AI in healthcare, using plain language and examples.” The more specific the details, the easier it is for the AI to know how to respond.
It’s also best to ask one question at a time. If you pose several unrelated questions at once, the model might not address each in detail. Consider breaking complex queries into multiple, sequential prompts to gradually gather information.
- Indicate the Desired Output Format or Focus
If you have specific requirements for how your answer should be presented, be sure to state them.
If you want a bullet-point list, add “Please use bullet points.” If you expect a step-by-step explanation, request that “the response be broken down into detailed steps.” For comparative analysis requiring tables, you can ask, “Please summarize the information in a table.” Highlighting key points in your prompt can guide the AI to concentrate on what matters most.
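One way to make these format directives reusable is to keep them in a small lookup table and append the chosen one to each question. The `FORMATS` keys and `with_format` helper below are illustrative names, not a standard convention:

```python
# Common output-format directives, kept in one place for reuse.
FORMATS = {
    "bullets": "Please use bullet points.",
    "steps": "Please break the response down into detailed, numbered steps.",
    "table": "Please summarize the information in a table.",
}

def with_format(question: str, format_key: str) -> str:
    """Append an explicit output-format instruction to a question."""
    return f"{question}\n{FORMATS[format_key]}"

prompt = with_format("Compare renting versus buying a home.", "table")
```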
- Utilize Open-Ended Question Templates
Open-ended questions often prompt richer and more in-depth answers than simple “Yes/No” responses. The classic 5W1H method (What, Why, Who, Where, When, How) is a good starting point. In learning scenarios, asking “what/why” helps uncover underlying principles; in decision-making scenarios, asking “how” questions can generate potential solutions.
American innovation consultant Warren Berger has promoted the “Why/What if/How” cascade model to stimulate innovative thinking. You can apply this in AI conversations: first ask “Why…” to understand the background or essence of a problem; then ask “What if…” to explore different scenarios; and finally ask “How…” to discuss actionable strategies. For example:
- Why: “Why were phones originally only used for calls?” (The AI can explain historical limitations of phone functions.)
- What if: “What if phones could also access the internet?” (The AI could discuss how internet connectivity might have expanded phone functions and impacts.)
- How: “How can we enable phones to access the internet?” (The AI might propose technical pathways like the development of mobile networks.)
By probing layer by layer, you can get closer to the core issue or understand it from multiple angles—an approach especially valuable for creative and comprehensive analysis.
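The phone example above can be generated mechanically from three fill-in slots. This is only a sketch of Berger's cascade as a question template; the `why_whatif_how` function and its parameter names are this example's own invention:

```python
def why_whatif_how(observation: str, change: str, goal: str) -> list[str]:
    """Turn one topic into the three-stage Why/What if/How question cascade."""
    return [
        f"Why {observation}?",   # understand the background or essence
        f"What if {change}?",    # explore alternative scenarios
        f"How can we {goal}?",   # move toward actionable strategies
    ]

questions = why_whatif_how(
    observation="were phones originally only used for calls",
    change="phones could also access the internet",
    goal="enable phones to access the internet",
)
```

Feeding these to an AI one at a time, in order, reproduces the layer-by-layer probing described above.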
- Provide Examples or References
Large language models excel at mimicking a given style or pattern. If you have a particular output format in mind, offer a brief example for the AI to emulate.
For example, if you want a news article, present a short excerpt in the desired format and then ask the AI to produce similar content. Or in programming, include an initial code snippet for context. Providing sample output can greatly reduce the chance of the AI going off track.
If the AI’s response isn’t satisfactory, you can counter with further examples or clarifications—such as “This answer is too general; could you provide two concrete examples?”—to gradually guide the AI toward your desired outcome.
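Providing examples to imitate is often called few-shot prompting. A minimal sketch of assembling such a prompt is below; the `few_shot_prompt` name and the `Input:`/`Output:` labels are illustrative conventions, not a required format:

```python
def few_shot_prompt(instruction, examples, query):
    """Build a few-shot prompt: instruction, worked examples, then the new input."""
    parts = [instruction]
    for source, rewritten in examples:
        parts.append(f"Input: {source}\nOutput: {rewritten}")
    # End with the new input and an open "Output:" for the model to complete.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    "Rewrite each headline in a neutral, factual tone.",
    [("Shocking diet trick melts fat overnight!",
      "Study examines the effect of a dietary change on weight loss.")],
    "You won't believe what this AI can do!",
)
```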
- Consider Multi-Turn Interactions and Iterative Refinement
Instead of trying to ask everything at once, treat the interaction with AI as a dialogue, gradually delving deeper into the subject.
After receiving a basic answer from your initial prompt, you can follow up with questions for more details, clarifications, or different perspectives. For example, if the AI presents a viewpoint, you might ask, “What underlying assumptions does this viewpoint have?” or “Are there any opposing perspectives?” to enrich the discussion.
Iterative questioning also helps refine the prompt itself—if the AI misunderstands, reconsider and rephrase your question until the AI accurately captures your intent.
Communicating with AI is a dynamic process where the question itself can be “worked on” in real time. Through flexibility and iteration, you find that the answers increasingly align with your needs.
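Under the hood, multi-turn dialogue is commonly represented as an ordered list of role-tagged messages, with each follow-up appended so the model sees the full history on every turn. The sketch below uses that widely adopted role/content convention; the `follow_up` helper and the sample contents are illustrative:

```python
# A conversation as an ordered list of role-tagged messages.
conversation = [
    {"role": "user",
     "content": "Summarize the main argument of this article: ..."},
    {"role": "assistant",
     "content": "The article argues that remote work raises productivity."},
]

def follow_up(history, question):
    """Append a follow-up question so the next answer builds on earlier turns."""
    return history + [{"role": "user", "content": question}]

conversation = follow_up(
    conversation, "What underlying assumptions does this viewpoint have?"
)
```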
To summarize these points, here’s a small example of prompt optimization:
- Basic Prompt: “Write an article about artificial intelligence.”
  (The AI might produce a generic article covering history and applications, which may not meet your specific requirements.)
- Optimized Prompt: “You are a tech columnist. Please write an approximately 1000-word article, in plain language for beginners, that outlines the development history of artificial intelligence and includes three contemporary examples of its application (such as voice assistants and personalized recommendations). The article should have the following structure: introduction, AI history overview, application examples, and future outlook.”
The optimized prompt clearly specifies tone, audience, length, topic breakdown, and structure, greatly reducing the chance for the AI to misunderstand your needs.
Of course, even after receiving a draft, further manual editing and enhancements may be necessary, but you’ve already saved significant time in conceptualizing the initial piece.
Pitfalls to Avoid When Asking AI
Pitfall 1: Vague and Ambiguous Questions Expecting the AI to Guess Your Needs
Many people pose very broad questions, such as “Write an article about X,” which makes it difficult for the AI to know precisely what you require. Instead, specify your purpose, word count, and focus. For instance, if you need an article, describe its intended use, desired length, and the aspects you wish to explore. For factual inquiries, indicate the specific angle you are interested in.
Clear questions are a prerequisite for high-quality answers.
Pitfall 2: Overloading the Query with Multiple Questions Lacking Structure
When I first started interacting with AI, I tried to cram every question into one prompt—asking for an explanation of A, a comparison of B and C, and some advice on D—all at once. The AI would often only address parts of the query or provide incomplete responses.
I learned to break complex questions into smaller, sequential queries, allowing AI to focus on each point in detail and giving me time to reflect on the answers before asking the next question.
If you must cover multiple points in one prompt, organize them into separate sections and emphasize that all parts need to be addressed.
Step-by-step inquiry is more effective than trying to get everything at once.
Pitfall 3: Providing No Context
For example, if you ask about stock market trends by sharing only a candlestick chart, with no information about the company or its background, the AI will struggle to produce a meaningful analysis. Without sufficient background, the AI cannot generate useful insights. When communicating with AI, imagine explaining the task to a new colleague: provide all the necessary details.
Also, remember that the AI’s context window has limits. Even if you mentioned background details earlier, it’s best to briefly restate key points in each new question.
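One lightweight way to follow this advice is to keep the key background in one place and prepend it to every new question. The `KEY_CONTEXT` wording and `ask` helper below are illustrative, assuming the science-article scenario from earlier:

```python
# Key background worth repeating, in case earlier turns have scrolled
# out of the model's context window.
KEY_CONTEXT = (
    "Context reminder: I'm writing a plain-language article "
    "on climate change for high school students."
)

def ask(question, key_context=KEY_CONTEXT):
    """Prepend the key background to every new question."""
    return f"{key_context}\n\n{question}"

prompt = ask("Explain what a carbon budget is.")
```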
Pitfall 4: Ignoring the Feedback Loop in AI Communication—Treating It as a One-Off Transaction
Some users ask a question, receive an answer, and then start a new conversation for the next query, ignoring previous context. This means the AI has to relearn your background each time. In reality, the most effective conversations are those that occur over multiple rounds, building upon previous interactions.
Treat your dialogue with AI as a continuous process. Use the accumulated context to follow up, refine answers, or correct misunderstandings. Stick to one conversation thread for a given topic, and use the iterative feedback loop to guide the AI closer to your needs.
Pitfall 5: Limiting AI to Providing Answers, Instead of Allowing It to Ask You Questions
After using AI for a while, we might assume it only functions as a respondent. However, you can instruct the AI to pose questions back to you. Sometimes, our needs aren’t entirely clear to us, so having the AI ask clarifying questions can help uncover hidden aspects of your inquiry. This two-way communication turns the AI into a genuine consultant rather than a one-way command processor.
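This inversion can be baked into a reusable prompt template. The `consultant_prompt` name and the exact wording below are this sketch's own; the essential move is the instruction to ask before answering:

```python
def consultant_prompt(task, max_questions=3):
    """Invert the flow: ask the model to interview you before it answers."""
    return (
        f"I need help with the following task: {task}\n"
        f"Before giving any answer, ask me up to {max_questions} clarifying "
        f"questions about my goals, constraints, and audience, "
        f"then wait for my replies before proposing a plan."
    )

prompt = consultant_prompt("designing a personal website")
```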
The Cognitive Division of Labor Between AI and Humans
In human–machine collaboration, it is crucial to determine which tasks can be delegated to AI and which require human oversight, allowing each to play to their strengths.
Harnessing AI is like managing an extraordinarily capable yet uniquely temperamental assistant. To maximize its potential, we must understand what AI excels at and where human judgment remains indispensable.
There is no doubt that AI excels at processing vast amounts of information quickly; when you ask a question, it can instantly retrieve and synthesize relevant content.
Without accessing the internet, AI can deliver detailed answers on common topics based on its training data—essentially serving as an integrated encyclopedia. You can ask it to explain concepts, provide background, or list factual details. When you need to quickly familiarize yourself with a new field or gather preliminary information, AI acts as a superb information courier.
AI is also adept at discerning patterns from chaotic inputs and organizing language according to your requirements. For example, it can summarize articles, extract key points, or transform data into a compelling narrative. This is particularly useful for reading comprehension—you might ask AI to distill complex content into a few digestible points.
Due to its diverse training data, AI is also a powerful tool for brainstorming. It can generate multiple solutions to a problem, offer alternative ideas, or simply spark creativity (for instance, suggesting titles, outlines, or decision-making options). When you feel stuck, throwing the problem to AI may provide fresh, “outside-the-box” perspectives.
For tasks governed by clear rules, AI can execute instructions meticulously, such as translating text, rewriting sentences consistent with a sample provided, or sorting lists. As long as your prompt clearly outlines the guidelines, AI can reliably deliver consistent results.
But what can AI not do? One of its major shortcomings is its tendency for “hallucination”—sometimes fabricating nonexistent citations, facts, or details. Blindly trusting AI’s output without verification may lead to embarrassing mistakes. Therefore, humans must serve as fact-checkers and decision-makers, verifying important information.
Although AI can help analyze data, it is not as effective in making value judgments because many decisions involve personal preferences and subjective factors that data alone cannot resolve. AI can act as an advisor, but the final decision-making authority and responsibility remain with you. In other words, while we use AI to “know both ourselves and the enemy” (to thoroughly understand the information and options), the ultimate choice is ours.
For any project or problem, AI won’t autonomously identify the real question needing an answer—that task belongs to us.
As Chizuko Ueno said, “No one can pose your questions for you.”
We must first determine our goal and understand what problem we are trying to solve. Only then can we direct AI appropriately. Without a clear direction, even AI’s best efforts will result in irrelevant answers. Human insight is crucial for identifying the right questions and distinguishing truly important issues.
Most importantly, creativity—this remains the key quality that differentiates humans from AI. Although AI can mimic various styles and generate content, groundbreaking ideas often emerge from our unique experiences and emotions. In writing, AI can help polish language and provide material, but the soul of the work—its distinct viewpoint and genuine emotion—must come from the author.
Furthermore, AI-generated responses tend to be safe and unremarkable. This is where human critical review is essential, infusing personalized insights or bold hypotheses to give the final product greater depth and originality.
In the division of cognitive labor, think of AI as a highly efficient “analyst and synthesizer” and versatile “idea generator,” while humans remain the “question drivers” and rigorous “evaluators.” An effective collaboration typically follows this process: the human sets the goal, poses questions to AI, AI provides preliminary answers or proposals, and the human refines, consolidates, and ultimately decides or creates the final outcome.
Equally important, successful human–AI collaboration requires patience and a teaching mindset. Sometimes the AI’s output may be unsatisfactory—not because it is incapable, but because our questions need adjustment. Treating your interaction with AI as training a new colleague—invest time in refining your instructions and establishing effective communication—will gradually enable it to meet your demands more consistently.
Deconstructing and Evaluating Information
As mentioned earlier, AI can sometimes hallucinate or provide incorrect answers to factual questions.
How do we use critical thinking to assess whether the information provided contains logical errors? In Asking the Right Questions, Browne and Keeley demonstrate how to deconstruct arguments, clarify meanings, unearth hidden data, and identify pitfalls in reasoning.
Before evaluating any viewpoint, the first task is to understand the framework of its argument. An argument is not merely an opinion—it's a process intended to persuade us to accept a particular claim.
It begins with the thesis, the issue that sparks debate or controversy. Theses are typically either descriptive or prescriptive.
- Descriptive theses focus on what the world is like. They address whether descriptions of the past, present, or future are accurate. For example: “What are the most common causes of domestic violence?” or “Does studying music help improve mathematical ability?” Answers to such questions aim to describe facts or conditions.
- Prescriptive theses focus on what the world should be like. They often involve ethics, values, or norms, discussing what is right or wrong, good or bad, and what ought to be done. For example: “Should public schools teach intelligent design?” or “Should we ban certain behaviors?” Answers to these questions usually include notions of obligation or prohibition. Distinguishing between these two types is important because they require different evaluative criteria and methods.
The conclusion is the core message or main point that the author or speaker wants us to accept. It must be supported by reasons; without such support, the statement is just a personal opinion rather than a conclusion drawn from an argument. When reasons are provided, they form the logical basis that lends credibility to the conclusion.
Argument = Reasons + Conclusion. Only when both elements are clearly presented can we begin a meaningful evaluation of the argument. Every argument has a purpose (usually persuasion) and varies in quality.
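The formula above can be made concrete with a small illustrative model. The `Argument` class and `is_argument` check are this sketch's own names, not from the book; they simply encode the rule that a conclusion without reasons is only an opinion:

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    """Minimal model: a conclusion plus the reasons offered in its support."""
    conclusion: str
    reasons: list = field(default_factory=list)

    def is_argument(self) -> bool:
        # Without at least one supporting reason, a claim is only an opinion.
        return len(self.reasons) > 0

opinion = Argument(conclusion="Public schools should expand music programs")
argument = Argument(
    conclusion="Public schools should expand music programs",
    reasons=["Some studies report links between music training and math scores"],
)
```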
Besides understanding the overall logic, we must also examine the meanings of individual words, which can be complex and lead to misunderstandings.
Once we have identified the structure of the argument and clarified key terms, the next step is to scrutinize the quality of the evidence supporting the reasons. Evidence is the information used to substantiate the reliability of claims. A strong argument is built on high-quality supporting evidence.
First, we distinguish between facts and opinions. Facts describe what the world is like and can be verified, although their reliability may vary and be context-dependent. Opinions, however, are personal judgments. While they can be supported by facts, they often incorporate individual biases. Pure opinions have limited value in an argument; we need insights that are based on analyzed evidence and closely reflect the facts.
Evidence comes in various forms—personal experiences, representative cases, and eyewitness testimonies are common examples. Such evidence can evoke emotional responses as it is tied to first-hand accounts, but may lack generalizability if drawn from a small sample, potentially leading to overgeneralizations.
There is also selective bias: people tend to recall and describe experiences that align with their expectations or interests, ignoring contrary information. We can also be swayed by the sincerity, enthusiasm, or fame of the evidence provider, which may cloud our rational judgment.
Some evidence is based on direct observation; while first-hand observations can offer immediate data, they too are filtered through the observer’s values, biases, and expectations, meaning different observers might describe the same event differently.
More authoritative sources include research studies. High-quality scientific reports emphasize reproducibility, control, precision, and clarity. As a result, well-conducted studies are usually reliable sources of evidence.
The Power of Questioning
Question-driven human–machine interaction not only enables us to acquire knowledge and complete tasks more efficiently, but it also deeply cultivates critical thinking and creativity.
By digging beneath the surface during research, we break free from the limitations of textbooks and follow our innate curiosity to extract genuine knowledge. When making decisions, by questioning we avoid the traps of instinct and habit, leading to more rational choices that align with our values. In writing, engaging in dialogue with AI helps us overcome the limits of a single mind, expanding our thinking and refining our expression so that our work remains both broad and deep. When reading and comprehending material, questioning transforms us from passive consumers to active seekers of truth—even when faced with authoritative views, we can discern the subtle details.
At its core, this all hinges on assigning AI the correct role: it is not meant to replace our thinking, but to extend it. We use questioning to harness AI as an accelerator in our cognitive processes; simultaneously, we, with our uniquely human judgment, vet AI’s outputs, merging machine intelligence with human insight to achieve results far superior to what was possible before. In the AI era, the most powerful asset isn’t AI itself, but the person who can use it effectively. And the first requirement for effective use is the ability to ask questions.