Your Guide to Win
How to craft a winning essay
7 Tips and 1 Award-winning Essay with our feedback.
What Is the Singularity?
The Singularity: AI Essay Contest is an international competition run by Veritas AI, founded and directed by Harvard alumni. It invites high school students to write argumentative essays on the future of artificial intelligence — its risks, its possibilities, and what it means for the world.
Essays are reviewed by researchers from MIT, Oxford, and other leading institutions. The contest is free to enter, open worldwide, and no prior AI or computer science knowledge is required. Any curious, careful thinker can compete.
Winners receive scholarships of up to $2,490 toward any Veritas AI program. Submissions close April 26, 2026.
How Essays Are Judged
- Originality: A distinct viewpoint and fresh insights that go beyond the obvious.
- Analysis: A thorough understanding of the subject, supported by strong arguments.
- Evidence: Well-substantiated arguments using credible, integrated sources.
- Structure: Logical organization, clear progression of ideas, and smooth transitions.
- Presentation: Meticulously edited, formal, grammatically correct academic writing.
Choose Your Prompt
Select one. Essays must be under 1,500 words and formatted in MLA 8th edition.
7 Tips to Win
Concrete advice on what separates winning essays from the rest — straight from the judging criteria.
Take a Real Position — Don't Sit on the Fence
The most common mistake in essays like these is hedging. Students present "both sides" without committing to an argument, and the result reads like a summary rather than an essay. Judges are looking for intellectual courage: a clear claim, defended with evidence and logic.
Pick a prompt you have a genuine instinct about — even a tentative one — and build from there. Ask yourself: what do I actually believe, and why? A strong, arguable thesis like "AI does not diminish individual intelligence — it raises the floor while leaving the ceiling untouched" is far more compelling than "AI has both benefits and drawbacks for individual intelligence."
- State your thesis clearly in the first paragraph — do not make the reader hunt for your position
- Make sure every paragraph is doing work to support that thesis
- Return to your thesis in the conclusion with something added — not just a restatement, but a sharpened version of your original claim
Engage Seriously with Counterarguments
A strong essay does not ignore the other side — it addresses it and explains why the opposing view falls short. This is what separates undergraduate-level thinking from high school writing, and it is exactly what these judges are trained to look for.
For each major counterargument, do three things: state it fairly (do not build a straw man), grant what is reasonable about it, then explain why it does not undermine your thesis.
Example: If you are arguing that AI makes individual intelligence less valuable, you will need to address the counterargument that intelligence still matters for generating the questions AI answers — and explain why that concession does not save the opposing view.
Use the Judge's Commentary as a Map
Each prompt comes with a judge's commentary, and most students ignore it. Do not. The commentary tells you exactly what intellectual territory the judge finds interesting and what debates they expect you to engage with.
For Prompt 3, the Ted Chiang reference is not decorative — it is an invitation. If you can engage with Chiang's argument that unequal access to cognitive tools has historically shaped society, and bring something new to it, you will immediately stand out from essays that treat the prompt in the abstract. A judge who contributed a prompt and named a specific starting point will notice when an essay actually uses it.
Bring Specific Evidence — Not Just General Claims
Vague claims lose points. "AI is changing the workforce" is a claim. "The IMF's 2024 report estimated that 40% of global employment is exposed to AI disruption, rising to 60% in advanced economies" is evidence. The second version tells the reader you have actually done the research.
Where to find credible sources for free:
- Google Scholar — search for academic papers on any of the three prompt topics
- JSTOR — many articles are freely accessible with a free account
- Reports from the IMF, OECD, World Bank, and McKinsey Global Institute — all publicly available and frequently cited in AI policy discussions
- Reputable journalism — The Economist, MIT Technology Review, The Atlantic are appropriate to cite for current developments, used alongside academic sources
- Primary sources — if you are engaging with a philosopher (Kant, Rawls, Nozick are all relevant to Prompt 1), cite the original text, not a summary
Avoid: Wikipedia as a source (fine to use for orientation, not citation), personal blogs, unverified websites, or social media.
Choose Your Prompt Based on Where You Have Something to Say
All five prompts are strong, but they reward different kinds of writers.
Structure Your Essay to Do Work, Not Just Organize
A strong structure is not just introduction → body → conclusion. Each paragraph should advance the argument, not merely continue it. A useful test: if you could remove a paragraph without weakening your thesis, it probably should not be there.
A reliable structure for these prompts:
- Introduction — hook (a specific case, statistic, or question), context (why this matters now), and a clear thesis
- Body paragraph 1 — your strongest supporting argument, with evidence
- Body paragraph 2 — your second supporting argument, or a deeper development of the first
- Body paragraph 3 — the strongest counterargument, addressed and rebutted
- Conclusion — what your argument implies: what should follow from it, or what remains unresolved
Keep paragraphs focused. One idea per paragraph, fully developed.
Write Formally, Edit Ruthlessly
Academic writing is not about using long words — it is about being precise. Every sentence should mean exactly what you intend it to mean, no more and no less.
Practical editing checklist before you submit:
- Read the essay aloud to catch awkward phrasing and run-on sentences
- Confirm the word count is under 1,500 and the formatting follows MLA 8th edition
- Check that every in-text citation appears in your works cited list, and vice versa
- Cut filler words and vague qualifiers; every sentence should earn its place
- Proofread for grammar and spelling, then have someone else proofread again
A Sample Essay We Admire
This essay demonstrates many of the qualities we look for: a clear original thesis, serious engagement with counterarguments, specific evidence, and precise academic writing.
In a time when artificial agents can generate articulate, assured answers on any subject in a matter of seconds, what does it mean to know something? If knowledge has historically been forged through conversation, argument and reflection, then chatbots carry with them an epistemic contradiction: they deliver answers faster than humans can verify them, yet without the emotional or communal experience that generates understanding. A form of knowledge once rooted in social judgment risks becoming an instantaneous commodity, accepted, processed privately and rarely questioned.
According to a Pew Research Center study published in early 2024, roughly 23% of Americans had used a chatbot for emotional or informational support at least once. What began as a simple convenience, such as condensing an article or rewriting a sentence, has grown into a larger development: a technology that listens, comforts and clarifies when a human can neither listen, nor comfort, nor clarify. Chatbots have become humanity's newest first responders, not to emergencies of the body but to crises of understanding. Yet in offering instantaneous answers and synthetic empathy, they may be mending one fracture in the social fabric while deepening another.
This essay argues that chatbots excel as "epistemic first responders," tools that mitigate informational inequality and provide scalable cognitive support in moments of uncertainty. However, they are poor at solving social problems that depend on sustained, pluralistic deliberation, because they replace, rather than restore, the collective "social repair loop" through which societies rebuild shared meaning. While their algorithmic structure allows them to absorb and redistribute vast knowledge, that same structure weakens the intersubjective dynamics essential to trust, empathy and moral growth.
Chatbots as Epistemic First Responders
At their core, large language models (LLMs) such as ChatGPT are probabilistic text generating systems trained on enormous corpora of human language. They predict the next word in a sequence by modeling patterns of association across billions of parameters (Bender et al. 610). Though they do not "understand" in the human sense, they mimic the surface coherence of human conversation with remarkable fluency.
This computational architecture allows chatbots to act as "epistemic prostheses" (Floridi 13), extensions of human reasoning that can clarify, summarize and scaffold complex knowledge in real time. They lower the cognitive cost of accessing information, thereby narrowing the gap between experts and lay users. During the early COVID-19 pandemic, for example, public health chatbots deployed by the World Health Organization handled millions of inquiries daily, effectively democratizing access to verified information when misinformation surged ("WHO Health Alert").
In this way, chatbots directly address epistemic inequality, the uneven distribution of knowledge that drives many social divides. By offering free, personalized explanations, they empower individuals historically excluded from expert networks. This capacity represents an unprecedented social good: a mechanism for distributing cognitive resources at scale, akin to a universal library that speaks back.
The Mechanics Behind the Aid
Technically, the strength of chatbots lies in their generalization capacity, their ability to synthesize patterns across disparate data. Transformer architectures, introduced by Vaswani et al. in 2017, allow attention mechanisms to weigh context dynamically, producing contextually relevant output even with limited input. This design enables adaptive reasoning, making LLMs particularly effective in crisis-communication contexts, tutoring and mental health triage systems.
For instance, studies in digital mental health have shown that users disclose more openly to AI agents than to human therapists, citing lower fear of judgment (Lucas et al. 288). This suggests chatbots can temporarily fill emotional or informational gaps where human capacity falls short, precisely the role of an "epistemic first responder."
Yet these same architectures produce epistemic hazards. Because the models are trained on statistical regularities, they tend to reproduce prevailing linguistic and cultural biases (Mehrabi et al. 115). Their responses mirror consensus patterns, not moral reasoning. Thus, while they can stabilize information flows, they cannot mediate moral or cultural conflict.
The Collapse of the Social Repair Loop
Sociologist Emile Durkheim argued that social cohesion depends on collective rituals of meaning-making: public debate, shared emotion and mutual recognition. In modern networked societies, these rituals form a social repair loop, the process through which communities deliberate, disagree and reconcile after epistemic disruption.
Chatbots risk collapsing this loop by creating the illusion of understanding without the labor of mutual engagement. When an individual consults a chatbot rather than another human, they receive an answer that satisfies personal curiosity but bypasses communal negotiation. Over time, the locus of sense making shifts from shared deliberation to individualized consultation, a quiet privatization of knowledge.
This dynamic echoes Shoshana Zuboff's concept of "instrumentarian power," in which behavioral data systems guide users through subtle nudges rather than overt coercion (Zuboff 356). In epistemic terms, chatbots do not coerce belief; they pre-organize curiosity. The result is not ignorance but isolation: each person inhabits a personalized informational microclimate, diminishing the friction that once forged collective understanding.
The Social Limits of Algorithmic Empathy
Another boundary lies in the simulation of empathy. Conversational AI systems can express care and concern through verbal formulas such as "I understand" or "that sounds difficult," but they lack the phenomenological basis of empathetic connection: the ability to truly feel with someone else. Human-computer interaction studies suggest that users often anthropomorphize chatbots, projecting emotional depth that does not exist (Nass and Moon 83). While this may provide temporary consolation, it simultaneously erodes the habit of seeking, and expecting, expression and comfort from other people.
When social support is offloaded to machines, human communities risk atrophy in their empathic reflexes. As philosopher Hubert Dreyfus warned decades ago, overreliance on artificial mediation can "deskill" embodied forms of care (Dreyfus 45). Therefore, while chatbots may assuage the burden of loneliness in the short term, they may undermine the social capacity for reciprocal comfort, the very mechanism by which private distress becomes public, collective solidarity.
Rebuilding the Loop: Toward Hybrid Epistemics
To prevent this collapse, societies must reintegrate chatbots into the social repair loop rather than letting them replace it. This requires transparency about their epistemic limits and deliberate design that promotes reflection rather than mere efficiency.
Emerging initiatives provide hopeful models. Educational researchers at Stanford, for instance, have developed "explainable AI tutors" that reveal their reasoning pathways, prompting users to critique the model's logic (Williams et al. 72). Similarly, civic-dialogue projects such as Pol.is, which use AI to map consensus patterns without scripting conclusions, show how automation can augment, not replace, collective deliberation ("Pol.is Open-Source").
In these designs, chatbots serve as scaffolds for metacognitive awareness, classroom tools that teach users how to think with, and against, algorithmic knowledge. They act as "first responders" at moments of epistemic crisis, but send society back to itself to recover long-term.
Conclusion
Chatbots are well suited to alleviating the symptoms of social problems rooted in a lack of information: confusion, misinformation, and inequitable access to knowledge. They struggle, however, to address the causes: the collapse of trust, empathy and collective reasoning that lies at the heart of social disintegration.
The future of this technology will rely on the recognition that it has two sides: both a cure and a curse, both a bandage and an infection. To harness its promise, we must design not for replacement but for repair, to ensure that the first responder does not become the final voice.
Ready to Make Your Mark?
The Singularity prompts are hard on purpose. Join students from 160+ countries competing for scholarships to the world's leading AI education programs. Submissions close April 26, 2026.
Questions? Contact us at director@veritasai.com