Don’t show me your AI. It is rude!
Marek Tuszynski, Executive Director and co-founder of Tactical Tech. Berlin, 2026
For a while now, I have been running workshops to encourage participants to think critically about the hype surrounding generative AI. Unlike two or three years ago, one of the main issues I now face is that everyone in the room uses AI, whether they want to or not. This shift has created some unusual dynamics. For example, during a pre-session assessment designed to evaluate attendees’ knowledge of AI so that I can adapt my materials accordingly, I found that many people claimed to have an advanced understanding of generative AI. They often attributed their confidence to having used ChatGPT for over a year and having developed basic prompting strategies, equating this with broader expertise and an understanding of the technology, businesses and politics behind it.
Another significant change that I have noticed is that almost everyone in the room has been infected by the “Eliza effect”. This is not just my limited observation from the statistically insignificant number of workshops I have been able to run over the last few years; it is also supported by Harvard Business Review observations, the visualisation “Ranked: All the Things People Use AI for in 2025” and the Washington Post analysis of leaked data from thousands of conversations between users and their bots, “We analyzed 47,000 ChatGPT conversations. Here’s what people really use it for”.
No matter how I try to get the room to engage with issues such as rights, personal data, the military context of generative AI, environmental impact and labour abuse, participants will always contrast these issues with their personal and intimate dependency on their favourite bot (or deference to generative AI in general). Consequently, the Eliza effect undermines the prospect of critical engagement with generative AI. I am still working on various strategies to address this uncomfortable situation when working directly with people. Are there any arguments, practices or exercises that could temporarily reduce the Eliza effect until a “vaccine” can be found?

Detail of our Synthetic Trust poster from the Hello AI exhibit, 2025. You can host the exhibition; just contact us here: https://tacticaltech.org/artificial-intelligence-and-us/
The above image summarises the synthetic relationships that we form with generative AI. We are all familiar with using generative AI to assist with tasks such as translation, transcription, coding and editing. We are also familiar with using these tools as companions, such as friends, advisors or confidants. Lastly, many of us have experimented with using them to explore ideas, issues and concepts that fall somewhere between the two.
It is very difficult to have a rational conversation with people who have genuinely developed synthetic intimacy with the tools you are about to critique.
One effective strategy is to acknowledge that admitting to dependency or borderline addiction can be difficult for anyone. So let it be. This takes time and involves many steps. Rather than shaming or lamenting, it is better to share examples of approaches, projects and attitudes that address the challenges posed by Gen AI through provocation, creativity and, sometimes, simple explanations of how to work around them.
Below, I will share some examples that I like to use in my workshops. I am sure there are many more. I am not trying to promote or endorse anything here. Sometimes I do, but here I just want to share what inspires me and what has worked with students and many different groups, from journalists and creatives to people who decide on funding. I hope you, the reader, will send me your examples and ideas in exchange. Let's share nicely. Of course, I'm sure I'm missing a lot, or even misrepresenting some things too.
I have organised the examples into several categories. There is no need to read everything; each category provides a brief explanation of its contents and sometimes suggests further reading. The order of the categories is completely random, as is the order within each category. The recommended readings here and there may seem to contradict what is shown later, but that is the point: not everything is black and white. To hell with binary!
So let me recommend a book to you right now: The Reverse Contradictionary, by Vuk Cosic, Vladan Joler and IOCOSE. It's both a book and an online project. It's a great example of lateral thinking and provocation, helping us to reimagine things we might be taking for granted. Enjoy! (In other words, I accept no responsibility if any of the listed resources lead you down a rabbit hole.)

AI infrastructure and influence as seen from different angles
First, we will examine projects that attempt to take a broader view, helping us to navigate the various aspects of AI as an infrastructure, including its politics, relationships, and influence.
Recommended reading: Conspiratorial Design: Information Design for the Bigger Picture by Carlo Bramanti. I am not suggesting that any of the examples below are conspiratorial; I am merely suggesting that, whenever we engage with information design (we at Tactical Tech do) and propose different network mappings, we can easily be perceived as conspiratorial. This is not the case with the examples below.
AI War Cloud
Link: https://aiwar.cloud/
“AI decision making from battlefield to desktop”

Snapshot of the visualisation taken in February 2026, https://aiwar.cloud/
AI War Cloud is a research project and interactive installation that highlights the dangerous overlap between everyday AI technologies and those employed in modern warfare. The same machine learning tools that power consumer apps, such as recommendation systems and automated agents, are also used in military 'AI Decision Support Systems' (AI-DSS). These systems process vast amounts of data to make life-or-death decisions at unprecedented speeds.
The project reveals that the datasets, models and algorithms behind these military tools are remarkably similar to those used by civilians. It traces the development of these technologies, which were initially tested on vulnerable populations in conflict zones, before being used against the citizens of the countries that developed them. By mapping these connections, AI War Cloud raises urgent questions about accountability, responsibility, and the human cost of AI on the battlefield and at home. This research is even more relevant today than it was last year, particularly in light of the ongoing dispute between the US government and Anthropic.
Project by Sarah Ciston, https://sarahciston.com/, 2025
The Authoritarian Stack
“How Tech Billionaires Are Building a Post-Democratic America — And Why Europe Is Next”

Snapshot of the project taken in February 2026, https://www.authoritarian-stack.info/
This project visualises the 'Authoritarian Stack', which is a network of companies, investment funds, and political figures that are privatising essential government functions. Based on an open-source dataset, the map documents over 250 actors, thousands of verified connections, and financial transactions totalling $45 billion.
It consolidates and categorises data from public sources, revealing the relationships between key figures and institutions.
Project by Prof. Francesca Bria https://www.francescabria.com/ with xof-research.org, 2025
Anatomy of AI
Link: https://anatomyof.ai/
“An anatomical case study of the Amazon Echo as an artificial intelligence system made of human labor.”

Fragment of the visualisation taken in February 2026, https://anatomyof.ai/img/ai-anatomy-map.pdf
“Anatomy of an AI System” is a visual map and essay that explores the hidden costs of an Amazon Echo. Through an exploded-view diagram, it reveals the three core extractive processes behind large-scale AI systems, namely the material resources, human labour and data required to build and operate the device. The project traces these elements throughout the entire life cycle of a single Echo, providing a visual breakdown of what is truly required to power AI technology.
Project by Kate Crawford https://katecrawford.net/ and Vladan Joler https://labs.rs/en/vladan-joler/, 2018.
Calculating Empires
“A Genealogy of Technology and Power Since 1500.”

Fragment of the visualisation taken in February 2026, https://calculatingempires.net/
“Calculating Empires: A Genealogy of Technology and Power Since 1500” is a large-scale research visualisation tracing the co-evolution of technology and systems of power over five centuries. It reveals how historical patterns of colonialism, militarisation, automation and enclosure continue to shape the present day. The project is organised around four themes — communication, computation, classification and control — and maps the development of devices, infrastructures and social institutions, from early navigational tools and the Gutenberg press to today's AI, data brokers and hyperscale computing. By visualising the interconnected histories of empire, resource extraction and social control, the project exposes how past practices of classification, surveillance and environmental domination persist in modern technological and political systems. The project invites users to explore these complex histories, draw their own connections, and confront the legacies of empire in order to imagine alternative futures.
Project by Kate Crawford and Vladan Joler, 2023 (it took them 5 years to create it!)
Cartography of Generative AI
“The popularisation of artificial intelligence (AI) has given rise to imaginaries that invite alienation and mystification. At a time when these technologies seem to be consolidating, it is pertinent to map their connections with human activities and more than human territories. What set of extractions, agencies and resources allow us to converse online with a text-generating tool or to obtain images in a matter of seconds?"

Image of the visualisation taken in February 2026, https://cartography-of-generative-ai.net/
“Cartography of Generative AI”, a project by the Barcelona-based collective Estampa, unveils the hidden infrastructure powering generative AI and exposes its cultural, material and political underpinnings. The initiative traces the supply chain of AI, from the extraction of data without consent (a form of digital extractivism) to the monopolisation of the technology by a few US tech giants such as OpenAI, Google and Meta. It also sheds light on the precarious labour of micro-workers, primarily in the Global South, who carry out vital yet poorly paid tasks such as data labelling and content moderation. The environmental cost is significant: training large models uses energy and water equivalent to that used by tens of thousands of households. From an ethical standpoint, the project raises concerns about bias, copyright violations, hallucinations and the dissemination of synthetic misinformation.
Project by Estampa, https://tallerestampa.com/en/, 2024
A Guide to the Circular Deals Underpinning the AI Boom
“A web of interlinked investments raises the risk of cascading losses if AI falls short of its potential"

Snapshot of the project captured in March 2026, https://www.bloomberg.com/graphics/2026-ai-circular-deals/
Bloomberg turned a widely shared simple graphic from November 2025 into an interactive project, which went online in January 2026 and is what I am referring to here. The AI boom has been fuelled by a complex web of 'circular deals' between tech giants, chipmakers, and AI start-ups. Under this system, companies invest in each other while also becoming major customers. This interdependence began with Microsoft's landmark investment of $13 billion in OpenAI, which ensured that the start-up had the computing power it needed and also made OpenAI a key client of Microsoft's cloud services. The trend quickly spread: Amazon and Google invested billions in Anthropic; Nvidia invested in start-ups such as Mistral and xAI; and AI labs committed to making large-scale purchases of chips and cloud services from their backers. While these deals can accelerate growth by securing supply and funding, they also create significant risks. If demand for AI falls short of expectations, companies could face unsustainable costs and plummeting valuations, potentially triggering losses across the industry. Critics warn that such arrangements may distort incentives, leading to poor decisions and overinvestment in unproven technology, reminiscent of the speculative excesses of the late-1990s telecoms boom. Supporters, however, argue that the high cost and scarcity of advanced AI infrastructure make these partnerships essential for driving innovation and meeting surging demand.
Bloomberg's visualisation best illustrates the following quote from the fediverse: “... the reason RAM has quadrupled in price is that a huge quantity of RAM that hasn't been produced yet has been bought with money that doesn't exist to populate GPUs that also haven't been produced to go in datacenters that haven't been built powered by infrastructure that may never exist to meet a demand that doesn't exist at all to make profit margins that mathematically can't exist while economists talk about this thing they call the 'rational markets hypothesis'” (@mhoye@cosocial.ca)
Project by Cedric Sam, Rachael Dottle, Agnee Ghosh and Kyle Kim for Bloomberg News, published 22 January 2026
Media Capture Watch
“An interactive map revealing the funding relationships between Big Tech, AI companies, and journalism — exposing the emerging architecture of media capture”

Snapshot of the project taken in March 2026 - Media Capture Watch (2026). Mapping Big Tech's Influence on Journalism. https://github.com/nananwachukwu/media-capture-watch
Media Capture Watch is an open-source tool that you can access, explore or host. You can also run it on your own device and add data to it. The project monitors 13 companies (Google, OpenAI, Microsoft, TikTok/ByteDance, Nvidia, X/Twitter, Meta, Perplexity, Anthropic, ProRata.AI, Amazon, Mistral and IBM) and a number of media outlets. It currently tracks over 65 deals, worth over 1 billion USD, between AI companies and media outfits. The data set can be explored in different ways: network view by company, flow view, timeline, etc. The site does not track you and has no cookies.
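Because the tool and its dataset are open, you can also poke at the data yourself. Below is a minimal sketch of the kind of 'network view by company' the site offers, written in Python with networkx; the field names and sample deals are hypothetical stand-ins, not the project's actual schema.

```python
# A minimal sketch of a "network view" over a deals dataset.
# The records below are hypothetical stand-ins; swap in the real
# Media Capture Watch data if you host the tool yourself.
import networkx as nx

deals = [
    {"company": "ExampleAI",  "outlet": "Daily Example",  "usd_m": 60},
    {"company": "ExampleAI",  "outlet": "Example Weekly", "usd_m": 25},
    {"company": "SearchCorp", "outlet": "Daily Example",  "usd_m": 100},
]

G = nx.Graph()
for d in deals:
    G.add_edge(d["company"], d["outlet"], weight=d["usd_m"])

# Rank nodes by number of deals: a crude proxy for centrality,
# subject to the causality caveats discussed below.
for node, degree in sorted(G.degree, key=lambda x: -x[1]):
    print(node, degree)
```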
Network maps have their limitations, the most obvious of which is causality. The fact that entity 'x' has a deal with entity 'y' does not clearly determine the nature of this deal. Nevertheless, network maps and the tracking of deals and connections are crucial for identifying trends, researching actors and understanding changes in their relationships. This is particularly important in the context of AI.
Our organisation has experienced a significant increase in demand from newsrooms and independent media outlets seeking our assistance in adopting AI tools. We find it problematic that the shift to social media, sold as a silver bullet that would increase income and reach for small media outlets, turned out to be one of the biggest killers of local media strength and sustainability. Yet now these same media outlets are turning to AI, which is being sold to them as a way to make their work more affordable, effective and sustainable. They are ignoring the fact that they are being presented with the same narratives as before, by exactly the same big tech actors. What has changed to make us trust them this time?
Project by Nana Nwachukwu — Trinity College Dublin https://nananwachukwu.com/about


AI – Some Evidence please
The Distributed Artificial Intelligence Research Institute - DAIR
“The Distributed AI Research Institute is an independent organization conducting community-rooted research. We are a globally distributed group of academics, activists, and engineers who believe in technology that benefits everyone.”

Snapshot of the website of the project taken in March 2026, https://www.dair-institute.org/
The Distributed AI Research Institute (DAIR) is an independent organisation with a global scope, composed of academics, activists and engineers who are dedicated to developing technology that serves everyone. DAIR’s work is firmly grounded in lived experience and community needs, ensuring that research is inclusive, principled and comprehensive, while avoiding assumptions and prioritising meaningful, actionable results. The institute prioritises the well-being of its researchers, rejecting academic burnout and fostering environments where individuals can flourish professionally and personally. DAIR focuses on building technologies that reflect diverse communities, critically examining the real harms of AI while envisioning alternative tech futures centred on care, safety, and equity. Its research covers areas such as data for change, governance frameworks, and the impact of AI systems, with a constant commitment to community-driven solutions.
The organisation was founded by Timnit Gebru - https://www.dair-institute.org/team/timnit-gebru/ - in 2021
Data Workers’ Inquiry
“DATA WORKERS’ INQUIRY is a global, radically participatory research initiative spanning nine countries across five continents. Here, data workers themselves become community researchers, identifying urgent issues, formulating their own questions, and choosing the formats that best tell their stories: zines, documentaries, comics, essays, podcasts, and animations.”

Snapshot of the website of the project taken in March 2026, https://data-workers.org/
The Data Workers' Inquiry is a global participatory research initiative that empowers data workers, including content moderators and data annotators, to investigate their workplaces, document urgent issues and share their experiences through creative formats such as zines, documentaries and essays. Taking inspiration from Karl Marx’s 1880 Workers’ Inquiry, the project employs the Workers’ Inquiry as a Research Methodology (WIRM) to prioritise workers' lived experiences and transform hidden, precarious labour into an arena for collective knowledge creation and advocacy. Spanning nine countries across five continents, the initiative has produced first-hand accounts exposing exploitation, mental health struggles and systemic precarity. It has also fostered organising successes, such as the Data Labelers Association and the African Content Moderators Union. The initiative's impact extends to policy, with data workers testifying at the European Parliament to influence the Platform Workers’ Directive, and to recognition for community researchers, who have received awards and funding for mental health interventions.
Project by DAIR (see above), started in 2024
The AI Incident Database

Snapshot of the website of the project taken in March 2026, https://incidentdatabase.ai/
Launched in 2020, the AI Incident Database (AIID) is a public, open-source repository that catalogues real-world harms caused by artificial intelligence systems. Serving as a searchable archive of documented AI failures, the database contains examples ranging from algorithmic bias in recruitment and autonomous vehicle accidents to deepfake scams. It aims to help researchers, developers, policymakers and civil society organisations learn from past failures and transform abstract risks into concrete case studies. It is maintained by the Responsible AI Collaborative (RAIC), a non-profit organisation established in 2022. Operating under a participatory governance model, the RAIC collaborates with various global stakeholders, including academic institutions, civil society organisations, and industry partners, to build a collective memory of AI-related harm and improve safety and accountability. To date, the database contains nearly 1400 curated incident reports contributed by journalists, researchers and the public.
Project by Responsible AI Collaborative, https://incidentdatabase.ai/about/#collaborators, 2020
Weizenbaum Institute

Snapshot of the institute taken in March 2026, https://www.weizenbaum-institut.de/en
I am not going to explain what the Weizenbaum Institute is, but if you have the opportunity to participate in their free events, please take it: https://www.weizenbaum-institut.de/en/events/ . Not all of the events are in German; many are in English. In particular, I would like to promote their annual interdisciplinary conference.
If you've never heard of the Weizenbaum Institute for the Networked Society, it's an interdisciplinary research institute based in Berlin that was founded in 2017 to critically examine the societal, ethical, legal, economic and political impacts of digitalisation. Funded primarily by the German Federal Ministry of Education and Research (BMBF) and the State of Berlin, it was established through a collaboration of leading Berlin and Brandenburg research institutions, including Freie Universität Berlin, Humboldt-Universität zu Berlin, and the Berlin Social Science Center.
The institute addresses pressing questions such as digital participation, platform governance, knowledge organisation, and the role of digital infrastructures in democracy. Named after the renowned computer scientist Joseph Weizenbaum, who developed the ELIZA chatbot and later critiqued unchecked technological adoption, the institute champions digital self-determination, sustainability, and responsible technology design. It bridges the gap between research and practice by providing evidence-based recommendations to policymakers, businesses and civil society. The institute also publishes the Weizenbaum Journal of the Digital Society and hosts the annual Weizenbaum Conference to promote interdisciplinary dialogue.
Since we are here, I would like to recommend two articles by Rainer Rehak from the institute that I have recently shared with students: “AI Narrative Breakdown. A Critical Assessment of Power and Promise” and “Catastrophic Computation. On the Impossibility of Sustainable Artificial Intelligence”.
Weizenbaum Institute was founded in 2017

How to teach under pressure from AI
Here you will find a range of resources, guides, interactive tools and other materials that will give you a solid grounding in Gen AI. The resources also include examples of how people address the fact that collaboration becomes much more difficult when AI is always involved. To decide when and how to use AI in teaching, it is imperative first to gain some expertise about what it is as a system, what it can do and, more importantly, what it is not capable of.
A People's Guide to AI, 2nd Edition

Screenshot of the cover of the Zine taken in March 2026, https://www.peoplesguidetotech.com/
'A People's Guide to AI' is a concise, user-friendly resource designed to demystify artificial intelligence for people of all levels of expertise. First published in 2018, the guide was created in response to the fact that AI was rapidly reshaping society, yet conversations about its impact were being dominated by a small group of insiders. The authors — artists, educators and organisers — sought to bridge this gap by offering clear explanations of AI’s history, applications, risks and possibilities. They focused particularly on those who were often excluded from the discussion, such as young people, the elderly, rural communities, migrants and busy professionals. The guide’s success reflected a widespread desire to understand AI as it transitioned from niche research to mainstream attention. We were lucky to showcase it through our exhibitions and interventions.
This updated edition addresses both the 'what' (how AI systems work) and the 'why' (their societal implications), exploring who benefits from AI, the problems it solves and what remains unchanged. The authors advocate greater public involvement and awareness, challenging the idea that only experts can influence our technological future. Their aim is to empower readers to critically engage with AI, thereby fostering a world in which these tools promote broader joy and fulfilment. This engaging guide encourages curiosity and active participation, inviting readers to explore the role of AI in culture, economics and everyday life. It's clear how much this approach aligns with ours.
Project by Mimi Onuoha with support from Mother Cyborg (Diana Nucera), aka APGT, 2024. Check out their workshops.
Against AI
Link: https://against-a-i.com/
“This site is a rough draft, shared to ease back to school prep. Process is product! Materials here are intended as solidarity solace for educators who might find themselves inventing wheels alone while their administrators, trustees, and bosses unrelentingly hype AI and nakedly enthuse the negative consequences for educator labor.”

Screenshot taken in December 2025, https://against-a-i.com/
There are various projects that might have 'against AI' or a similar term in their name. There are a few things about this specific project that I really like and would like to share: it addresses the problem from a very hands-on and practical perspective. We are here to teach, and our teaching space has been polluted by AI, for better or worse. So let's figure out how to challenge that, how to adapt, and how to modify our approaches. The screenshot from the project is not incidental: I particularly like this section on designing school assignments so that they teach students what you want them to learn, such as creativity, research, analysis, fact verification and the ability to summarise ideas and thoughts, without giving in to their favourite AI bot. Very inspiring indeed, and so analogue!
Project by Anna Kornbluh, Krista Muratore and Eric Hayot: http://humanitiesworks.org, http://www.annakornbluh.com/teaching-2/, and http://v21collective.org/syllabus-bank/
Glossaire d'éducation populaire sur l'intelligence artificielle
Link: https://gsara.be/
“AI in Perspective is a tool primarily intended for social workers, teachers, trainers and facilitators who wish to address the issue of Artificial Intelligence with their audiences.”

Screenshot of the project site taken in December 2025, https://gsara.be/
There are a few reasons why I refer people to this project. Not only is it well produced under a CC licence, but it also addresses a wide audience. It is a very well organised set of topics taken from three perspectives on AI: applications, reflections and context. For example, it looks at how AI is or will be used; how AI looks from collective, democratic, journalistic and social perspectives; and, in the context section, how AI is replicating colonial histories and modes. Each section is very simple and well organised, with a short video and a list of resources for anyone who wants to follow up. I watch and read a lot of different didactic approaches to see how others are trying to explain complicated, complex and often nerdy topics to audiences who don't need or want to learn new vocabulary and contexts, but who want to know what 'it' (in this case AI) is and what it does or can do to them more specifically.
Project by GSARA, https://gsara.be/ , 2025
Six protocols for permacomputational self-defense against laborious computing [RYBN]
“The Protocols are a tactical toolbox against what we mistakenly call 'Artificial Intelligence', a phenomenon we prefer to refer as 'Laborious Computing'.”

Screenshot of the project site taken in March 2026, https://carrier-bag.net/human-computers/
This is a project by the art group RYBN. I had a chance to experience one of these analogue workshops, PROTOCOL 3: HUMAN PERCEPTRON, during the Re-enacting Dartmouth Conference organised by IMA in Austria last year (2025): https://ima.or.at/en/projekt/reenacting_dartmouth/ . All the protocols are basically group performances; they require some preparation, and they are definitely tedious at times, but they are still fun and rewarding learning experiences.
Project by RYBN, since 2016
AI Explorables
“Big ideas in machine learning, simply explained - The rapidly increasing usage of machine learning raises complicated questions: How can we tell if models are fair? Why do models make the predictions that they do? What are the privacy implications of feeding enormous amounts of data into models? This ongoing series of interactive essays will walk you through these important concepts.”

Screenshot of the project site taken in March 2026, https://pair.withgoogle.com/explorables/
This is a wide selection of interactive overviews of the different technical aspects of AI systems and their context. Topics covered include 'Datasets Have Worldviews' by Dylan Baker, who also runs analogue workshops exploring fairer and more just technological futures (see this interview), and 'Can Large Language Models Explain Their Internal Mechanisms?', as well as many others developed by the PAIR team at Google.
Project by PAIR (People + AI Research), first launched in 2017. PAIR is part of the Responsible AI and Human-Centered Technology team within Google Research: https://research.google/blog/responsible-ai-at-google-research-pair/

Creative Interventions
The following is an overview of various interventions, ranging from straightforward artistic explorations and sabotage tools to provocations and funding schemes that support such activities. Recommended reading: Hito Steyerl, “Medium Hot – Images in the Age of Heat”.
Algorithmic Sabotage Research Group
“The Algorithmic Sabotage Research Group (ASRG) is a conspiratorial, practice-led research framework focused on the intersection of culture, politics, and technology. Its aim is to present and generate new tactics for action within the framework of digital culture and information technology, while highlighting interventions that provoke political and social transformation.”

Screenshot of the project site taken in February 2026, https://algorithmic-sabotage.gitlab.io/asrg/
If you are interested in subversive and dissident practices, decolonisation, feminist and creative counter-power, this is one of the places where you should start your journey.
They offer workshops (one for now) and a set of tools/interventions that can help entrap AI bots, obfuscate your content and make automated scraping harder.
They also have a great repository of other tools available out there on the net; check here.
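Before diving into their toolbox, it is worth knowing how low the entry barrier to this kind of intervention is. The sketch below generates a robots.txt that asks known AI training crawlers to stay away; the user-agent tokens are the publicly documented ones for a few major crawlers, but such lists go stale quickly and compliance is entirely voluntary, so treat it as a gesture rather than a guarantee (and not as ASRG's own method).

```python
# Generate a robots.txt asking known AI training crawlers not to
# scrape the site. Tokens like GPTBot (OpenAI) and CCBot (Common
# Crawl) are publicly documented, but lists go stale and compliance
# is voluntary.
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "CCBot", "Google-Extended", "Bytespider"]

def robots_txt(agents):
    blocks = [f"User-agent: {a}\nDisallow: /" for a in agents]
    return "\n\n".join(blocks) + "\n"

with open("robots.txt", "w") as f:
    f.write(robots_txt(AI_CRAWLERS))
```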
Anonymous project by ASRG - https://algorithmic-sabotage.gitlab.io/asrg/about/ , 2025
Slop Evader
“A browser extension for avoiding AI slop. Available for Chrome or Firefox.”

Screenshot of the project site taken in February 2026, https://tegabrain.com/Slop-Evader
This is a very simple project that helps us avoid AI-generated content using a browser extension for Chrome and Firefox. It filters search results to show only content published before the public release of ChatGPT on 30 November 2022, ensuring that you only access human-created text, images and videos, free from the growing pollution of AI slop. Tega used to call herself the Eccentric Engineer; I am not sure she still does, but it is on YouTube. While we are here, I don't want people to assume this is all Tega does. We were lucky to collaborate with her many times, and I will use this moment to make one more suggestion: look at her project Asunder from 2019, which she realised together with Julian Oliver and Bengt Sjölén (members of the Critical Engineering Working Group): https://tegabrain.com/Asunder . Asunder presents an AI environmental manager which models future planetary interventions to maintain ecological balance. This is visualised through a three-screen dashboard powered by a custom 144-CPU supercomputer which runs the CESM climate model.
Project by Tega Brain - https://tegabrain.com/, 2025
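Slop Evader's underlying trick is simple enough to reproduce by hand: restrict any web search to results dated before ChatGPT's release. Here is a minimal sketch in Python, assuming Google's documented before: date operator; the extension itself may implement its filter differently.

```python
# Reproduce Slop Evader's core idea by hand: restrict a web search
# to content dated before ChatGPT's public release (30 Nov 2022).
# Relies on Google's documented "before:" operator; the browser
# extension may implement its filter differently.
from urllib.parse import urlencode

CUTOFF = "2022-11-30"

def pre_slop_search_url(query: str) -> str:
    return "https://www.google.com/search?" + urlencode(
        {"q": f"{query} before:{CUTOFF}"}
    )

print(pre_slop_search_url("how to keep a sourdough starter alive"))
```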
QUILI.AI – Analog Intelligence
Link: https://www.quili.ai/
“The story of a community on a mission to save its water”

Screenshot of the project site taken in January 2026, https://www.quili.ai/
This was both a project and an action. On 31 January 2026, more than 40 people from Quilicura answered over 25,000 prompts between 8 am and 8 pm. In other words, it was a new realisation of the Mechanical Turk, except that the people were not anonymous, the purpose was clear, and the community of Quilicura offered their time to address the prompts as humans; no AI was involved. I asked one question, which was to explain a specific organisation's strategy. It took some time, but I finally received a concise, single-sentence answer that was spot on and satisfying, something I would not expect from generative AI. Where is Quilicura? It is in Chile, in an area affected by the highest proliferation of water-hungry data centres in the region. It is well worth watching the promotional video they created.
Project by Corporación NGEN, a Chilean environmental organisation based in Quilicura, a community in the Maipo River Basin, one of the most water-stressed regions in Chile, 2026
Error 417
“Error 417 Expectation Failed is an independent foundation supporting radically contemporary Internet art and net-based arts practices. We encourage open-ended formats, risky projects and explorative artworks that engage critically with contemporary technology. Our focus lies on art that addresses the frictions between technology, aesthetics, politics and social relations to challenge power structures — one hack, glitch, fail, error, process and experiment at a time.”

Screenshot of the project site taken in March 2026, https://error417.expectation.fail/
This is a new initiative conceived to support creative and research projects that might not otherwise receive funding to develop their ideas. Take a look at the list of projects that have recently been awarded funding, for example 'My Data Is Too Dirty For Your Model' by Jiawen Uffline.
Error 417 Expectation Failed is an independent foundation launched in 2025 by the artists !Mediengruppe Bitnik https://wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww.bitnik.org
Have I Been Trained
(Beware: it has been under maintenance for a while, but I still think it's worth mentioning.) It was a search tool that allowed artists to check which Gen AI tools might have used their work to train their models.

Screenshot of the project site taken in March 2026, https://haveibeentrained.com/
The project comprised a variety of tools, including a search function and a 'Do not train' registry. It was first taken offline in 2024 after research proved that content searched by the tool could lead to access to child sexual abuse material (CSAM); this issue was promptly addressed by the authors. This project is an excellent example to consider when designing countermeasures for black boxes. I am not aware of the reasons behind the service being in maintenance mode for such a long time. You can see what it looked like before it went into maintenance mode here. However, the authors have been busy exploring issues around data ownership. I would refer you to two of their projects in this area: the book 'All Media is Training Data' (https://shop.serpentinegalleries.org/products/holly-herndon-mat-dryhurst-all-media-is-training-data) and the research they conducted with the Serpentine Gallery, which you can access here: 'Choral Data "Trust" Experiment White Paper'.
Project(s) by Holly Herndon & Mat Dryhurst, https://herndondryhurst.studio/ ; the tool was released in 2022
(If you're interested in exploring the issues related to generative AI and copyright, here is some recommended reading: 'Copyright Law in the Age of AI: Analysing AI-Generated Works and Copyright Challenges in Australia' by Nirogini Thambaiya, Kanchana Kariyawasam and Chamila Talagala, https://www.tandfonline.com/doi/full/10.1080/13600869.2025.2486893 .)
Eyes On AI
Link: https://eyeson-ai.org/
“Learn about the ways AI-powered surveillance impacts your life and community. Learn about the ways you can fight back.”

Screenshot of the project site taken in March 2026, https://eyeson-ai.org/
This is a game (one that respects your privacy) and a guide; both explore the vast spectrum of ways in which AI tools are, or can already be, used for surveillance. The project is one of many they have done since 2020 - https://genzforchange.org/our-work
Project by Gen-Z For Change - https://genzforchange.org/index (look them up), 2026

Reverse engineering
Despite the openness claims of AI companies, most of the generative AI we encounter is a black box. However, there is always a way to open it up: reverse engineering. This begins with creative thinking about how to design a process that reveals the inner workings of closed systems. Recommended reading on this topic is Cory Doctorow's “The Reverse Centaur's Guide to Criticising AI”
AI Forensics
Link: https://aiforensics.org/
“AI Forensics is a European non-profit that investigates influential and opaque algorithms. We hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms.”

Screenshot of the project site taken in December 2025, https://aiforensics.org/
AI Forensics is an inspiring European non-profit organisation dedicated to investigating the hidden workings of influential — and often opaque — algorithms. Since 2016, they have been developing cutting-edge auditing tools and methodologies to empower researchers, journalists and policymakers with evidence of algorithmic harm. Through innovative techniques such as sock-puppeting and data donation, AI Forensics mimics user behaviours and collects behavioural data to expose systematic violations of digital rights, particularly those affecting marginalised communities. Their multidisciplinary team then transforms these findings into actionable insights, driving impact through high-profile press coverage, policy recommendations, and strategic litigation. Their mission is clear: to hold tech platforms to account, challenge algorithmic injustices, and ensure technology serves the many, not the few.
AI Forensics is a project founded in 2021 by Claudio Agosti, Marc Faddoul and Salvatore Romano https://aiforensics.org/about
Eticas AI
Link: https://www.eticas.ai/
“Eticas has pioneered independent AI assurance since 2012, helping organisations worldwide build systems that are fair, safe, and accountable.”

Screenshot of the project site taken in December 2025, https://www.eticas.ai/
Eticas AI is two things, a hybrid: a company and a non-profit foundation. The company, Eticas AI, is a venture-backed business focused on AI safety, auditing, and responsible innovation. It develops the ITACA platform, an automated solution that enables the continuous detection of risks, analysis of impacts, and monitoring of AI systems. The platform helps organisations to identify algorithmic bias, ensure fairness and comply with over 15 legal frameworks. Eticas AI serves clients in the healthcare, finance, government, education and cybersecurity sectors, helping them to turn AI risk into reliability and compliance into a competitive advantage. Headquartered in New York, USA, the company emphasises transparency, accountability, and trust in AI systems, positioning itself as a leader in ethical AI auditing.
The Eticas Foundation, a 501(c)(3) non-profit organisation, operates alongside Eticas AI. It focuses on real-world impact research, community-driven audits, and public interest AI. The Foundation investigates the impact of AI systems on marginalised communities, conducts qualitative and quantitative audits, and develops technical and social impact standards to promote fair, auditable and inclusive AI.
Eticas, both the consultancy and the foundation, is a project of Gemma Galdón-Clavell, started in 2012 - https://ai.northeastern.edu/our-people/gemma-galdon-clavell

Data about AI Data Centres
Lastly, although we all love maps, their use is monopolised by a select few. Nevertheless, geopolitical maps are helpful for zooming out and viewing the bigger picture. I often set my students the task of analysing maps, particularly ones that share data freely. This allows you to use the data to figure out whatever you want, identifying connections and analysing numbers with your favourite tools. Sadly, many resources are not open source. I don't have many options here, which is why this is almost the last chapter of the article, with one exception: a set of unverifiable maps. Please let me know if you know of anything better. Recommended reading: The AI climate hoax by Ketan Joshi, the investigation from Business Insider from September 2025, “See where data centers are across the US on our interactive map”, plus the Nieman Lab article “As AI data centers scale, investigating their impact becomes its own beat” (https://www.niemanlab.org/2026/03/as-ai-data-centers-scale-investigating-their-impact-becomes-its-own-beat/).
Epoch.ai Frontier Data Centers

Screenshot of the project site taken in February 2026
At first glance, this may not look great, as the map only covers AI data centres for which they have satellite images. However, there is much more to it. Epoch's Frontier Data Centres Hub is an independent database that tracks the construction timelines of major US AI data centres using high-resolution satellite imagery, permits and public documents. In particular, it provides valuable insights into the energy usage, cooling systems, and timelines of these centres.
Project by Epoch AI, a multidisciplinary non-profit research institute investigating the future of artificial intelligence. The hub is up to date until the end of February 2026, as far as I know.
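If you do get your hands on an export from a hub like this, the "analyse the numbers with your favourite tools" exercise I set students can start very small. A sketch using pandas, with a hypothetical CSV; Epoch's actual schema and figures will differ.

```python
# Toy analysis of a data-centre dataset: total reported power per
# US state. Columns and figures are hypothetical placeholders;
# adapt them to whatever schema your source actually publishes.
import io
import pandas as pd

csv = io.StringIO("""site,state,power_mw,year_online
Alpha Campus,TX,300,2025
Beta Park,WI,250,2026
Gamma Hub,TX,150,2026
""")

df = pd.read_csv(csv)
by_state = df.groupby("state")["power_mw"].sum().sort_values(ascending=False)
print(by_state)
```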
FracTracker Alliance Map
Link: https://ft.maps.arcgis.com/apps/instant/sidebar/index.html?appid=fdb7678fb2e345eb8b0a3a49971240c4
Since half of the world's data centres are located in the US, here we go.

Screenshot of the project site taken in March 2026, https://ft.maps.arcgis.com/apps/instant/sidebar/index.html?appid=fdb7678fb2e345eb8b0a3a49971240c4
This map is partly crowdsourced. It was originally a university project and is currently run by a non-profit organisation. The non-profit's main focus is not AI data centres, but rather monitoring and exposing the risks of oil, gas and petrochemical development in order to promote just energy alternatives that protect public health, natural resources and the climate.
Project by FracTracker Alliance, 2025 https://www.fractracker.org/2025/07/national-data-centers-tracker/
DCMAP
Link: https://dcmap.jatevo.ai/
“Every data centre on Earth is mapped. The open-source, real-time alternative to Bloomberg's $24,000-per-year Datacenter MAP tool. Free forever.” This is what Jatevo say, and they are an AI infrastructure thing built on blockchain...

Screenshot of the project site taken in February 2026, https://dcmap.jatevo.ai/
This is supposedly an open-source project, and the data is presented in an easily processable format. However, further exploration reveals some strange findings. For example, an OpenAI centre in Wisconsin appears to be located in China, which could be a glitch. Nevertheless, it is a nice presentation in various formats.
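A quick way to catch glitches like that misplaced Wisconsin site is a coordinate sanity check: compare each record's claimed country against a rough bounding box. A sketch with hypothetical field names and deliberately crude boxes:

```python
# Flag records whose coordinates fall outside a rough bounding box
# for the country they claim to be in. Field names are hypothetical
# and the boxes (continental US, China) are deliberately crude.
BBOXES = {  # (min_lat, max_lat, min_lon, max_lon)
    "US": (24.0, 50.0, -125.0, -66.0),
    "CN": (18.0, 54.0, 73.0, 135.0),
}

records = [
    {"name": "Example WI site", "country": "US", "lat": 30.6, "lon": 114.3},
    {"name": "Example TX site", "country": "US", "lat": 31.0, "lon": -100.0},
]

for r in records:
    box = BBOXES.get(r["country"])
    if box:
        min_lat, max_lat, min_lon, max_lon = box
        if not (min_lat <= r["lat"] <= max_lat and min_lon <= r["lon"] <= max_lon):
            print("suspicious:", r["name"], (r["lat"], r["lon"]))
```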
Jatevo started the project in around 2023 as a one-person initiative using some AI agents. But I could be wrong. I could be very wrong here. Try it at your own risk.
Baxtel
Link: https://baxtel.com/map
This is a non-open-source database and map. It is a strictly commercial product, but the online map is free for anyone to explore. And it is extensive. And it has been around for a while.

Screenshot of the project site taken in February 2026, https://baxtel.com/map
The Baxtel map does not focus specifically on AI data centres. However, it shows the locations of 9,100 data centres around the world. Users can explore the locations of nearby centres and access further information about ownership, location, and energy usage, which could be useful for subsequent investigations. The map also illustrates the scale of the sector's unprecedented growth and where it is happening.
They say that “Project Baxtel is a platform for data centre research, advisory services and procurement, with 25,000 monthly users”. Deep diving, however, is only available upon payment approval.
Project running since 2015. Founded by Eric Bell.
Data Centre Map
Yet another non-open-source data centre map, with a different interface. “Launched in 2007, Data Center Map was the first research tool of its kind. We operate a global data center directory, mapping data center locations worldwide. Our intention is to make it easier for buyers, sellers, investors, regulators and other professionals working with the industry to gain insights into the markets of their interest.”

Screenshot of the project site taken in February 2026, https://www.datacentermap.com/
Again, a database focused on data centres, with no specific focus on AI. It is a commercial tool that enables you to explore data centre by data centre, with pretty detailed information about each of them: its capacity, who it is owned by, who is hosted there, etc. But, again, it is commercial. A good starting point at least.
Project initially started by Data Centre Research in 2007

Beware of AI Smog-ish stuff
Here are two examples of resources that might seem intriguing at first, but are they really worth your time? Nevertheless, they are good discussion starters. I'm listing them here at the end as they are circulating around – but maybe they shouldn't be. Definitely not endorsing!
Not By AI
Link: https://notbyai.fyi/
“The Not By AI® badges are created to encourage more humans to produce original content and help audiences identify human-generated content. The ultimate goal is to make sure humanity continues to advance.”

Screenshot of the project site taken in March 2026, https://notbyai.fyi/
This one looks interesting, but there are some problems with it. It presents itself as a forward-thinking organisation of creators wanting to signal that their content is created without the use of AI. However, it is not a movement; it is a business selling expensive badges. Anyone can buy a badge and stick it on their content; as a rule of thumb, you need to claim that 90% of your content is not AI-generated: https://notbyai.fyi/not-by-ai-90-rule . The owner also runs an AI design outfit.
Project by Allen Hsu, a Philadelphia-based designer and creator. Started in 2023.
The Pro-Human AI Declaration
The Future of Life Institute launched this initiative in March 2026; it has been supported and endorsed by various political, religious and civil society leaders.

Screenshot of the project site taken in March 2026, https://humanstatement.org/
I am looking forward to using this example to work with students on declarations that, at first glance, appear to be things we would want to be associated with. They advocate keeping humans in charge, avoiding the concentration of power, protecting the human experience, standing for human agency and liberty, and holding AI companies accountable. And then second on the list of endorsers is Steve Bannon. The Future of Life Institute itself stands for long-termism. So here is another recommended reading: “Against Longtermism”, to provide some context. For a deeper analysis of this declaration, check out “Nothing to Declare” by Tante.
Project by the Future of Life Institute, March 2026
That’s that for now.
All images of pebbles and sand by author – from the series: man with the compound eyes, Taiwan 2025