Singularity: AI Essay Contest 2026

Your Guide to Winning

How to craft a winning essay

7 Tips and 1 Award-winning Essay with our feedback.

$2,490
Top Prize
5 Prompts
Choose One
1,500
Word Limit
Apr 26
Deadline 2026

What Is the Singularity?

The Singularity: AI Essay Contest is an international competition run by Veritas AI, founded and directed by Harvard alumni. It invites high school students to write argumentative essays on the future of artificial intelligence — its risks, its possibilities, and what it means for the world.

Essays are reviewed by researchers from MIT, Oxford, and other leading institutions. The contest is free to enter, open worldwide, and no prior AI or computer science knowledge is required. Any curious, careful thinker can compete.

Winners receive scholarships of up to $2,490 toward any Veritas AI program. Submissions close April 26, 2026.

How Essays Are Judged

  • Originality A distinct viewpoint and fresh insights that go beyond the obvious.
  • Analysis A thorough understanding of the subject, supported by strong arguments.
  • Evidence Well-substantiated arguments using credible, integrated sources.
  • Structure Logical organization, clear progression of ideas, and smooth transitions.
  • Presentation Meticulously edited, formal, grammatically correct academic writing.

Choose Your Prompt

Select one. Essays must be under 1,500 words and follow MLA 8th edition format.

Prompt 01
Can an AI system truly "understand" and "discover" scientific theories, or is it only approximating patterns in data? Does the distinction matter?
Is scientific understanding about internally consistent reasoning, or just producing accurate predictions? If an AI gives correct answers without "understanding," should we treat it as a scientific authority?
Jonas Katona
PhD Applied Mathematics · Yale University
Prompt 02
If we have AI, how should the world change?
When large changes happen on too short a timescale, there are often large structural problems if the system fails to adapt in time. What preparations — from governance to personal lives — can ease these problems?
Andrey Boris Khesin
PhD Mathematics · MIT
Prompt 03
Does AI make individual intelligence less valuable?
LLMs have given everyone a PhD-level interlocutor in their pocket. How does this interact with the importance of being smart today? Start with Ted Chiang's "Catching Crumbs from the Table."
Henry Cerbone
DPhil Biology · University of Oxford
Prompt 04
How should society balance technological progress with social responsibility to workers?
The transition we are currently undergoing is critical: Humans Need Not Apply. What do we do when most people have become unemployable? Discuss.
Nick Koukoufilippas
PhD Astrophysics · University of Oxford
Prompt 05
In a world where AI can generate art, code, and literature in seconds, what remains uniquely human about the creative process?
Discuss whether "creativity" requires human intent and lived experience, or if the output is all that matters. What value do we provide in an era of automated intelligence?
Alvaro Martinez-Pechero
PhD Engineering Science · University of Oxford

7 Tips to Win

Concrete advice on what separates winning essays from the rest — straight from the judging criteria.

Tip 01

Take a Real Position — Don't Sit on the Fence

The most common mistake in essays like these is hedging. Students present "both sides" without committing to an argument, and the result reads like a summary rather than an essay. Judges are looking for intellectual courage: a clear claim, defended with evidence and logic.

Pick a prompt you have a genuine instinct about — even a tentative one — and build from there. Ask yourself: what do I actually believe, and why? A strong, arguable thesis like "AI does not diminish individual intelligence — it raises the floor while leaving the ceiling untouched" is far more compelling than "AI has both benefits and drawbacks for individual intelligence."

You Should
  • State your thesis clearly in the first paragraph — do not make the reader hunt for your position
  • Make sure every paragraph is doing work to support that thesis
  • Return to your thesis in the conclusion with something added — not just a restatement, but a sharpened version of your original claim
Tip 02

Engage Seriously with Counterarguments

A strong essay does not ignore the other side — it addresses it and explains why the opposing view falls short. This is what separates undergraduate-level thinking from high school writing, and it is exactly what these judges are trained to look for.

For each major counterargument, do three things: state it fairly (do not build a straw man), grant what is reasonable about it, then explain why it does not undermine your thesis.

Example: If you are arguing that AI makes individual intelligence less valuable, you will need to address the counterargument that intelligence still matters for generating the questions AI answers — and explain why that concession does not save the opposing view.

Tip 03

Use the Judge's Commentary as a Map

Each prompt comes with a judge's commentary, and most students ignore it. Do not. The commentary tells you exactly what intellectual territory the judge finds interesting and what debates they expect you to engage with.

For Prompt 3, the Ted Chiang reference is not decorative — it is an invitation. If you can engage with Chiang's argument that unequal access to cognitive tools has historically shaped society, and bring something new to it, you will immediately stand out from essays that treat the prompt in the abstract. A judge who contributed a prompt and named a specific starting point will notice when an essay actually uses it.

Tip 04

Bring Specific Evidence — Not Just General Claims

Vague claims lose points. "AI is changing the workforce" is a claim. "The IMF's 2024 report estimated that 40% of global employment is exposed to AI disruption, rising to 60% in advanced economies" is evidence. The second version tells the reader you have actually done the research.

Where to find credible sources for free:

  • Google Scholar — search for academic papers on any of the five prompt topics
  • JSTOR — many articles are freely accessible with a free account
  • Reports from the IMF, OECD, World Bank, and McKinsey Global Institute — all publicly available and frequently cited in AI policy discussions
  • Reputable journalism — The Economist, MIT Technology Review, and The Atlantic are appropriate to cite for current developments, used alongside academic sources
  • Primary sources — if you are engaging with a philosopher (Kant, Rawls, Nozick are all relevant to Prompt 1), cite the original text, not a summary

Avoid: Wikipedia as a source (fine to use for orientation, not citation), personal blogs, unverified websites, or social media.

Tip 05

Choose Your Prompt Based on Where You Have Something to Say

All five prompts are strong, but they reward different kinds of writers:

Prompt 01
Can an AI system truly "understand" and "discover" scientific theories, or is it only approximating patterns in data? Does the distinction matter?
Rewards students drawn to philosophy of mind and epistemology. If you find yourself asking what "understanding" even means — for humans or machines — this is your prompt. The best essays take a clear position on whether the distinction between pattern-matching and genuine understanding is meaningful, and why it matters for how we use AI in science.
Prompt 02
If we have AI, how should the world change?
Rewards students who think in systems. The key word is "should" — this is a normative question, not a descriptive one. The best essays make a specific, arguable claim about governance, labor policy, or individual adaptation, rather than cataloguing everything that might change.
Prompt 03
Does AI make individual intelligence less valuable?
Rewards students drawn to philosophy and ideas. If you have thought about meritocracy, cognitive inequality, or what it means to be "smart," this is your prompt. The Ted Chiang story is worth reading before you decide — it may change what you want to say.
Prompt 04
How should society balance technological progress with social responsibility to workers?
Rewards students comfortable with economics, history, and policy. If you can bring evidence from past technological transitions — the Industrial Revolution, the mechanization of agriculture — and use that to argue something specific about what should happen now, this is your prompt.
Prompt 05
In a world where AI can generate art, code, and literature in seconds, what remains uniquely human about the creative process?
Rewards students who care deeply about art, authorship, or identity. The question is not just what AI can produce — it is what creativity is. The strongest essays will stake out a clear position on whether intent, experience, or process is what makes something genuinely creative, and defend it against the obvious counterarguments.
Tip 06

Structure Your Essay to Do Work, Not Just Organize

A strong structure is not just introduction → body → conclusion. Each paragraph should advance the argument, not merely continue it. A useful test: if you could remove a paragraph without weakening your thesis, it probably should not be there.

A reliable structure for these prompts:

  • Introduction — hook (a specific case, statistic, or question), context (why this matters now), and a clear thesis
  • Body paragraph 1 — your strongest supporting argument, with evidence
  • Body paragraph 2 — your second supporting argument, or a deeper development of the first
  • Body paragraph 3 — the strongest counterargument, addressed and rebutted
  • Conclusion — what your argument implies: what should follow from it, or what remains unresolved

Keep paragraphs focused. One idea per paragraph, fully developed.

Tip 07

Write Formally, Edit Ruthlessly

Academic writing is not about using long words — it is about being precise. Every sentence should mean exactly what you intend it to mean, no more and no less.

Practical editing checklist before you submit:

  • Read your essay aloud. Sentences that are hard to say are usually hard to read.
  • Cut any sentence that could be removed without losing meaning.
  • Replace vague words ("things," "a lot," "very," "important") with specific ones.
  • Do not use contractions ("don't" → "do not").
  • Avoid first-person where possible — keep the focus on the argument.
  • Check every citation is formatted in MLA 8th edition.
  • Confirm your essay does not exceed the word limit, excluding bibliography.

A Sample Essay We Admire

This essay demonstrates many of the qualities we look for: a clear original thesis, serious engagement with counterarguments, specific evidence, and precise academic writing.

Swasti Sahoo
Adamstown Community College, Dublin, Ireland
Sample Essay
"Chatbots as Epistemic First Responders and the Collapse of the Social Repair Loop"
Why this essay works
  • Clear thesis in the opening: The argument is stated in the third paragraph and never wavers.
  • Original framing: The "epistemic first responder" and "social repair loop" concepts give the essay its own vocabulary.
  • Specific evidence: Pew Research data, Transformer architecture citations, Durkheim, Zuboff — all integrated, not decorative.
  • Counterargument addressed: The section on "Rebuilding the Loop" grants the opposing view its due before rebutting it.
  • Strong conclusion: The final line — "ensure that the first responder does not become the final voice" — sharpens the thesis rather than restating it.
Prompt (previous cycle)
What kinds of social problems are chatbots good at solving, and what kinds of social problems are they bad at solving?

In a time when artificial agents can generate articulate and assured answers about whatever subject area you desire in a matter of seconds, what does it mean to know something? If knowledge has historically been forged amidst conversation, argument and reflection, then chatbots also carry with them an epistemic contradiction: answers delivered faster than humans can affirm them, but with no emotional or communal experience that generates understanding; a form of knowledge that, once rooted in social judgment, risks being understood as an instantaneous commodity that could be accepted, processed privately and hardly questioned.

According to a Pew Research Center study published in early 2024, roughly 23% of Americans had used a chatbot for emotional or informational support at least once. What began as a simple convenience, such as to condense an article or rewrite a sentence, has somewhat transcended into a larger development: a technology that listens, comforts and clarifies when a human can neither listen, nor comfort, nor clarify. Chatbots have become humanity's newest first responders, not to emergencies of the body but to crises of understanding. Yet in offering instantaneous answers and synthetic empathy, they may be mending one fracture in the social fabric while deepening another.

This essay argues that chatbots excel as "epistemic first responders," tools that mitigate informational inequality and provide scalable cognitive support in moments of uncertainty. However, they are poor at solving social problems that depend on sustained, pluralistic deliberation, because they replace, rather than restore, the collective "social repair loop" through which societies rebuild shared meaning. While their algorithmic structure allows them to absorb and redistribute vast knowledge, that same structure weakens the intersubjective dynamics essential to trust, empathy and moral growth.

Chatbots as Epistemic First Responders

At their core, large language models (LLMs) such as ChatGPT are probabilistic text generating systems trained on enormous corpora of human language. They predict the next word in a sequence by modeling patterns of association across billions of parameters (Bender et al. 610). Though they do not "understand" in the human sense, they mimic the surface coherence of human conversation with remarkable fluency.

This computational architecture allows chatbots to act as "epistemic prostheses" (Floridi 13), extensions of human reasoning that can clarify, summarize and scaffold complex knowledge in real time. They lower the cognitive cost of accessing information, thereby narrowing the gap between experts and lay users. During the early COVID-19 pandemic, for example, public health chatbots deployed by the World Health Organization handled millions of inquiries daily, effectively democratizing access to verified information when misinformation surged ("WHO Health Alert").

In this way, chatbots directly address epistemic inequality, the uneven distribution of knowledge that drives many social divides. By offering free, personalized explanations, they empower individuals historically excluded from expert networks. This capacity represents an unprecedented social good: a mechanism for distributing cognitive resources at scale, akin to a universal library that speaks back.

The Mechanics Behind the Aid

Technically, the strength of chatbots lies in their generalization capacity, their ability to synthesize patterns across disparate data. Transformer architectures, introduced by Vaswani et al. in 2017, allow attention mechanisms to weigh context dynamically, producing contextually relevant output even with limited input. This design enables adaptive reasoning, making LLMs particularly effective in crisis-communication contexts, tutoring and mental health triage systems.

For instance, studies in digital mental health have shown that users disclose more openly to AI agents than to human therapists, citing lower fear of judgment (Lucas et al. 288). This suggests chatbots can temporarily fill emotional or informational gaps where human capacity falls short, precisely the role of an "epistemic first responder."

Yet these same architectures produce epistemic hazards. Because the models are trained on statistical regularities, they tend to reproduce prevailing linguistic and cultural biases (Mehrabi et al. 115). Their responses mirror consensus patterns, not moral reasoning. Thus, while they can stabilize information flows, they cannot mediate moral or cultural conflict.

The Collapse of the Social Repair Loop

Sociologist Emile Durkheim argued that social cohesion depends on collective rituals of meaning-making: public debate, shared emotion and mutual recognition. In modern networked societies, these rituals form a social repair loop, the process through which communities deliberate, disagree and reconcile after epistemic disruption.

Chatbots risk collapsing this loop by creating the illusion of understanding without the labor of mutual engagement. When an individual consults a chatbot rather than another human, they receive an answer that satisfies personal curiosity but bypasses communal negotiation. Over time, the locus of sense making shifts from shared deliberation to individualized consultation, a quiet privatization of knowledge.

This dynamic echoes Shoshana Zuboff's concept of "instrumentarian power," in which behavioral data systems guide users through subtle nudges rather than overt coercion (Zuboff 356). In epistemic terms, chatbots do not coerce belief; they pre-organize curiosity. The result is not ignorance but isolation: each person inhabits a personalized informational microclimate, diminishing the friction that once forged collective understanding.

The Social Limits of Algorithmic Empathy

Another boundary lies in the simulation of empathy. Conversational artificial intelligence is capable of expressing care and concern through verbal expressions associated with care, such as "I understand" or "that sounds difficult," but it lacks the phenomenological basis of empathetic connection, the ability to truly feel with someone else. Human-computer interaction studies suggest that users often anthropomorphize chatbots, projecting emotional depth that does not exist (Nass and Moon 83). While this may provide temporary consolation, it simultaneously erodes the habit of seeking, and the expectation of receiving, comfort from other people.

When social support is offloaded to machines, human communities risk atrophy in their empathic reflexes. As philosopher Hubert Dreyfus warned decades ago, overreliance on artificial mediation can "deskill" embodied forms of care (Dreyfus 45). Therefore, while a chatbot may assuage the burdens of loneliness in the short term, it may undermine the social capacity for reciprocal comfort, the very mechanism by which private distress moves into public collective solidarity.

Rebuilding the Loop: Toward Hybrid Epistemics

To prevent this collapse, societies must reintegrate chatbots into the social repair loop rather than letting them replace it. This requires transparency about their epistemic limits and deliberate design that promotes reflection rather than mere efficiency.

Emerging initiatives provide hopeful models. Educational researchers at Stanford, for instance, have developed "explainable AI tutors" that reveal their reasoning pathways, prompting users to critique the model's logic (Williams et al. 72). Similarly, civic-dialogue projects such as Pol.is, which use AI to map consensus patterns without scripting conclusions, show how automation can augment, not replace, collective deliberation ("Pol.is Open-Source").

In these designs, chatbots serve as scaffolds for metacognitive awareness, classroom tools that teach users how to think with, and against, algorithmic knowledge. They act as "first responders" at moments of epistemic crisis, but send society back to itself to recover long-term.

Conclusion

Chatbots are especially prepared to alleviate the symptoms of social issues related to an absence of information: confusion, misinformation, and inequitable access to knowledge. They struggle if challenged to address the causes: the collapse of trust, empathy and collective reasoning that lie at the heart of social disintegration.

The future of this technology will rely on the recognition that it has two sides: both a cure and a curse, both a bandage and an infection. To harness its promise, we must design not for replacement but for repair, to ensure that the first responder does not become the final voice.

Ready to Make Your Mark?

The Singularity prompts are hard on purpose. Join students from 160+ countries competing for scholarships to the world's leading AI education programs. Submissions close April 26, 2026.

Questions? Contact us at director@veritasai.com