Tag: “hallucination”

  • AI synthetic data: training models without breaching privacy

    AI synthetic data: training models without breaching privacy

    How can telcos use AI-generated synthetic data to fuel machine learning? Telecommunications companies are sitting on a huge volume of data. Call records, location pings, browsing sessions, and usage patterns can all paint a remarkably detailed picture of how millions of people move through their lives. But regulations like GDPR and CCPA, plus an ever-expanding patchwork of local data residency laws, mean telcos are limited in how they can use much of this data for things like AI and ML projects.

    Synthetic data, however, could be a workaround. Instead of piping real customer records into machine learning pipelines, telcos are increasingly generating artificial datasets that statistically mirror actual customer behavior without containing real data points. The idea is simple enough — algorithms learn the patterns, distributions, and correlations baked into real data, then spin up entirely new records that preserve those statistical properties while being completely fabricated.

    Models trained on synthetic data let telcos build and iterate on network optimization, churn prediction, personalized services, and predictive maintenance — none of which requires exposing actual customer information to breach risk or the weight of privacy law. It’s not a perfect solution, and there are genuine trade-offs involved, but for an industry that’s simultaneously heavily regulated and increasingly reliant on AI, synthetic data is one of the most practical paths available right now.

    How synthetic data generation works

    Deep learning generative models are the most sophisticated tools available for capturing the complex behavioral dynamics telcos actually care about. These are neural network architectures built to learn the underlying structure of real datasets and reproduce it convincingly. GANs, or Generative Adversarial Networks, are probably the most widely recognized approach.
    Two neural networks compete with each other — a generator produces synthetic data while a discriminator tries to tell whether the output looks real. That push-and-pull forces the generator toward increasingly realistic records over successive training rounds. GANs shine when it comes to complex, multivariate sequences — exactly the kind of data you’d encounter in location tracking or communication pattern analysis, where multiple variables interact across time.

    Variational Autoencoders, or VAEs, work differently. They compress real data down into a compact latent representation and then decode it back out as synthetic samples. That compression-decompression cycle is particularly good at capturing probabilistic variation and maintaining structural smoothness, which makes VAEs a strong fit for generating slightly varied behavioral patterns while keeping statistical integrity intact. GANs tend to produce sharper, more specific outputs, while VAEs lean toward smoother, more broadly distributed data. Each has its sweet spot depending on what you’re trying to accomplish.

    Transformer models, including GPT-based architectures, are also part of the picture. These can process structured customer logs and usage records, learning the relationships and patterns within them. They’re effective for generating task-specific synthetic records with prompt-driven control, letting engineers specify exactly what kind of data they need. The caveat is that transformer-generated outputs often need additional validation to confirm the results are statistically grounded rather than just plausible-sounding.

    Not everything demands deep learning, though. Rule-based generation still has a role, and sometimes it’s the more appropriate choice. Simulation models replicate real-world processes using predefined rules and variables. Data transformation techniques apply mathematical operations to existing records to create new synthetic data points.
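    The adversarial loop is easier to see in miniature. The sketch below is a toy, not a production GAN: a linear generator and a logistic discriminator trained on an invented 1-D "daily usage minutes" distribution, with the gradients derived by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    # Clip to avoid overflow in exp for large |s|.
    return 1.0 / (1.0 + np.exp(-np.clip(s, -30.0, 30.0)))

# "Real" data: an invented 1-D stand-in for a customer metric.
def sample_real(n):
    return rng.normal(4.0, 1.25, n)

w, b = 1.0, 0.0   # generator g(z) = w*z + b
a, c = 0.1, 0.0   # discriminator D(x) = sigmoid(a*x + c)

lr, batch = 0.01, 64
for _ in range(3000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    real = sample_real(batch)
    fake = w * rng.normal(size=batch) + b
    p_real, p_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    # For cross-entropy loss, d(loss)/d(logit) = p - label.
    a -= lr * (np.mean((p_real - 1) * real) + np.mean(p_fake * fake))
    c -= lr * (np.mean(p_real - 1) + np.mean(p_fake))

    # Generator step: push D(fake) toward 1.
    z = rng.normal(size=batch)
    fake = w * z + b
    p = sigmoid(a * fake + c)
    w -= lr * np.mean((p - 1) * a * z)   # chain rule through fake = w*z + b
    b -= lr * np.mean((p - 1) * a)

synthetic = w * rng.normal(size=10000) + b
print(f"synthetic mean={synthetic.mean():.2f}, std={synthetic.std():.2f}")
```

    Run long enough, the generator's samples drift toward the real distribution; richer architectures are what let the same loop work on high-dimensional telco records.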
    Markov chains generate sequential data where each value depends on the previous one — a natural fit for time-series events like location traces or communication session logs. These methods lack the flexibility of neural network approaches, but they’re cheaper, easier to interpret, and in many cases perfectly sufficient for the job.

    Privacy preservation

    The reason synthetic data works as a
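    To make the Markov-chain approach concrete, here is a minimal sketch. The location-event sequences are invented for illustration; a first-order chain just counts observed transitions and resamples from them.

```python
import random
from collections import defaultdict

# Invented "real" session logs: each list is one day of location events.
real_sequences = [
    ["home", "commute", "office", "office", "commute", "home"],
    ["home", "commute", "office", "lunch", "office", "commute", "home"],
    ["home", "gym", "commute", "office", "commute", "home"],
]

# Count first-order transitions, bracketing each day with markers.
counts = defaultdict(lambda: defaultdict(int))
for seq in real_sequences:
    prev = "<start>"
    for state in seq:
        counts[prev][state] += 1
        prev = state
    counts[prev]["<end>"] += 1

def sample_sequence(rng, max_len=20):
    """Draw one synthetic day by walking the fitted transition counts."""
    out, state = [], "<start>"
    for _ in range(max_len):
        nxt = counts[state]
        state = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        if state == "<end>":
            break
        out.append(state)
    return out

rng = random.Random(7)
synthetic = [sample_sequence(rng) for _ in range(5)]
for day in synthetic:
    print(" -> ".join(day))
```

    Every transition a sampled day contains was observed in the real logs, which is both the appeal (realism, interpretability) and the limitation (no genuinely novel patterns) of the method.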

  • The Natural Way of Things

    The Natural Way of Things

    In a patch of sunlight Verla sits on a wooden folding chair and waits. When the door opens she holds her breath. It is another girl who comes into the room. They lock eyes for an instant, then look away to the floor, the walls. The girl moves stiffly in her weird costume, taking only a few steps into the room. The door has closed behind her. The only spare chair is beside Verla’s, so Verla gets up and moves to the window. It is too much, that she be put so close to a stranger. She stands at the window, looking out through a fly-spotted pane at nothing. There is bright sunlight coming into the room, but only reflected off the white weatherboards of another building just metres away. She presses her face to the glass but can see no windows anywhere along the length of that building. She can feel the other girl behind her in the room, staring at her peculiar clothes. The stiff long green canvas smock, the coarse calico blouse beneath, the hard brown leather boots and long woollen socks. The ancient underwear. It is summer. Verla sweats inside them. She can feel it dawning on the other girl that she is a mirror: that she too wears this absurd costume, looks as strange as Verla does. Verla tries to work out what it was she had been given, scanning back through the vocabulary of her father’s sedatives. Midazolam, Largactil? She wants to live. She tries wading through memory, logic, but can’t grasp anything but the fact that all her own clothes—and, she supposes, the other girl’s—are gone. She blinks a slow glance at the girl. Tall, heavy-lidded eyes, thick brows, long black hair to her waist is all Verla sees before looking away again. But she knows the girl stands there dumbly with her hands by her sides, staring down at the floorboards. Drugged too, Verla can tell from her slowness, her vacancy—this runaway, schoolgirl, drug addict? Nun, for all Verla knows. But somehow, even in this sweeping glance, the girl seems familiar. 
    She understands fear should be thrumming through her now. But logic is impossible, all thinking still glazed with whatever they have given her. Like the burred head on a screw, her thoughts can find no purchase. Verla follows the girl’s gaze. The floorboards glisten like honey in the sun. She has an impulse to lick them. She understands that fear is the only thing now that could conceivably save her from what is to come. But she is cotton-headed, too slow for that. The drug has dissolved adrenaline so completely it almost seems unsurprising to be here, with a stranger, in a strange room, wearing this bizarre olden-day costume. She can do nothing to resist it, cannot understand nor question. It is a kind of dumb relief. But she can listen. Verla strains through her sedation. Somewhere beyond the door is the judder of some domestic motor—a fridge, maybe, or an air-conditioning unit. But the place is stinking hot, primitive. She has no idea where they are. The room is large and light. There are the two wooden folding chairs—empty, the other girl did not sit—against a wall painted milky green, and a blackboard at the other end of the room with a rolled vinyl blind high up at the top of the board. Verla knows without knowing that if she tugged on the ring dangling from the centre of the blind she would pull down a map of Australia, coloured yellow and

  • Can We Teach Civil Discourse in a Digital Age?

    Can We Teach Civil Discourse in a Digital Age?

    Hello, I’m Tom Vander Ark. Lately, I’ve been thinking a lot about how hard it has become to talk across differences in America, especially when the issues are complex, emotional, and constantly amplified by social media. We ask young people to navigate that environment every day, yet we rarely give them the time, tools and permission to practice the skills that make real dialogue possible: building an argument on shared evidence, listening with curiosity, and staying in relationship even when disagreement is real. That’s why this conversation matters, not as a ‘nice to have,’ but as a core part of preparing students for civic life and for the kind of community-building our schools are uniquely positioned to support. I sat down with Dr. Vikki Katz, a communications professor at Chapman University and the director of the new OR Initiative. We talked about the roots of her work, shaped by growing up in South Africa during the dismantling of apartheid, and how that experience informs her optimism about what’s possible when people commit to rebuilding trust. We also explored what civil discourse can look like in middle school and high school classrooms, how educators can be supported through system-wide leadership and professional learning communities, and why digital discernment and civil discourse have to be taught together in an AI-accelerated world. Introduction & Origins Tom Vander Ark: We’re talking about civil discourse today: what it is, why it’s important, and how to teach young people something that we’re not very good at as adults in America. We have the pleasure of speaking with Dr. Vikki Katz. She’s a communications professor at Chapman University. Vikki, what a treat to have you join us. Dr. Vikki Katz: Thank you for having me. Tom Vander Ark: Where did your interest in civil discourse come from? Dr. Vikki Katz: My interest in civil discourse goes all the way back. 
I grew up in South Africa through the 1980s and early 1990s and was old enough to understand what was happening as apartheid was dismantled, Mandela was released from prison, and South Africans came together to build and reimagine a country. The ways in which South Africa did that really left an imprint on me. I joke that it has made me pathologically optimistic about what is possible, even when it looks impossible. And that really has always been one of the engineering lessons that underlies the work that I’ve done in these kinds of areas. Tom Vander Ark: It’s a beautiful origin story. My last trip to South Africa reminded me that my Dutch Calvinist roots include a lot of culpability for setting up a system that prevented civil discourse. And so that’s one thing I think about when I visit. Tom Vander Ark: South Africa, Vikki, you launched an initiative called the OR Initiative to bring this set of skills—civil discourse skills—to middle school, high school and college students. Tell us about that. Dr. Vikki Katz: So we officially launched in early February 2026, but we spent a year doing foundational work to really listen on the things that we were interested in so that we wouldn’t start building solutions without making sure that we were building things that people really needed. We’ve launched the OR Initiative at Chapman University as a new program, and our mission is to help educators prepare our young people to become more digitally discerning and make it easier for them to build robust and shared evidence bases so that they can learn to engage in civil discourse on a shared base, and be able to get braver about talking

  • SPIDER-MAN: BRAND NEW DAY Rumor Points To This Iconic Comic Book Moment Being Recreated

    SPIDER-MAN: BRAND NEW DAY Rumor Points To This Iconic Comic Book Moment Being Recreated

    Spider-Man: Brand New Day may be the year’s most highly anticipated movie, but as another day goes by, we still don’t have a trailer for the web-slinger’s long-awaited MCU return.
    Sony Pictures appears to be taking a similar approach to how Spider-Man: No Way Home was marketed in 2021. While that’s frustrating for fans, the blockbuster grossed $1.9 billion at a time when theaters were still struggling with the damage caused by the pandemic, so the approach is understandable.
    Promo art leaks have offered fans an early look at Spider-Man: Brand New Day’s villains, while rumours about what to expect from the movie continue to swirl. The latest is a little hard to believe, but gives us something to chew on as the long wait for a teaser continues.
    According to insider @MyTimeToShineH, there are rumblings that Tom Holland’s wall-crawler dies in the movie. They later followed that up by sharing the cover of Web of Spider-Man #32, part of the iconic Kraven’s Last Hunt storyline, and hinted that we’ll see that moment recreated on screen this summer.
    Now, that does admittedly sound far-fetched, but there are a couple of ways for Marvel to recreate this beloved piece of imagery in Spider-Man: Brand New Day.
    The first would be during a poison-induced hallucination sequence caused by The Scorpion. We’ve heard rumblings about the movie featuring a scene like that, so it’s the likeliest possibility (another is that Peter Parker has nightmares about his future death while in a web cocoon expected to give him new abilities like organic webbing).
    Another is that someone actually buries Spidey, believing him dead, only for the hero to fight his way out of that and back into the land of the living.
    For now, though, we’ll have to wait and see whether this pans out when Spider-Man: Brand New Day heads our way later this year.
    In Spider-Man: Brand New Day, four years have gone by since we last caught up with our friendly neighborhood hero. Peter Parker is no more, but Spider-Man is at the top of his game, keeping New York City safe. Things are going well for our anonymous hero until an unusual trail of crimes pulls him into a web of mystery larger than he’s ever faced before.
    In order to take on what’s ahead, Spider-Man not only needs to be at the top of his physical and mental game, but he must also be prepared to face the repercussions of his past!
    Shang-Chi and the Legend of the Ten Rings helmer Destin Daniel Cretton directs Spider-Man: Brand New Day from a script by returning Spider-Man franchise writers Chris McKenna and Erik Sommers.
    Tom Holland plays Spider-Man in a cast that also includes Jon Bernthal (The Punisher), Mark Ruffalo (The Hulk), Zendaya (MJ), Sadie Sink, Michael Mando (The Scorpion), Tramell Tillman, Marvin Jones III (Tombstone), Jacob Batalon (Ned Leeds), and Liza Colón-Zayas. Avengers: Doomsday star Florence Pugh is expected to reprise her Thunderbolts* role as Yelena Belova.
    Spider-Man: Brand New Day will be released in theaters on July 31, 2026.

  • The algorithm’s fatal flaw: Was an AI error responsible for the massacre of 160 schoolgirls in Iran?

    The algorithm’s fatal flaw: Was an AI error responsible for the massacre of 160 schoolgirls in Iran?

    The world is in shock following the tragic events that occurred in the southern Iranian city of Minab on February 28. A missile struck the Shajareh Tayyebeh Girls’ Primary School during routine morning classes, killing more than 160 schoolgirls aged 7 to 12. The dust is yet to settle on one of the deadliest operations of the 2026 Iran War. Was this tragedy the result of a calculated strike, an accident in the heat of war, or an error in artificial intelligence?

    The target: Why was the school in the crosshairs?

    The strike occurred in the midst of an extensive joint aerial campaign by the US and Israel. The Iranian government claims the death toll is as high as 180, while US officials insist they do not intentionally target such facilities; the US is “investigating” the matter. Initial findings indicate a complex technical overlap:

    The military legacy: The school’s location was once a confirmed IRGC base.
    The proximity issue: Although the IRGC site had been decommissioned to accommodate the school, military installations remain in the immediate area.
    The strategic error: Sources suggest the missile was meant for the nearby active military complex but struck the school instead.

    ‘Algorithm warfare’: The new face of combat

    In 2026, the choice of target is no longer the sole prerogative of human military strategists. The US military, often in partnership with AI leaders such as Anthropic and OpenAI, uses sophisticated “targeting systems.” These systems scan satellite images, drone footage, and telecommunications signals to detect threats in seconds. The phenomenon is now known as “Algorithm Warfare.” Although AI systems are capable of processing thousands of targets in real time, the Minab disaster points to the catastrophic dangers of eliminating the “human in the loop.”
    Three ways AI can commit a deadly error

    From a technical standpoint, if an AI system were responsible for the Minab strike, it probably made one of the following errors:

    Outdated training data: If the system’s database indicated, based on historical imagery, that the coordinates belonged to an active IRGC base, it would have flagged the building as a high-priority military target without knowing it had been repurposed as a school years earlier.

    Contextual misinterpretation: AI systems can struggle to distinguish civilian architecture from military architecture, especially in densely populated urban areas. If a system detects a secure perimeter or other electronic signatures near a school, it may classify the entire block as a combat zone.

    Automated over-reliance: Much of the latest weaponry runs on an “auto-pilot” system. If final authorization for a strike is left to the machine, with no human reviewing the video feed, the machine cannot see the children in the classroom.

    The precedent: From Gaza to Minab

    The debate surrounding the use of AI in war is nothing new. In past conflicts, systems like Israel’s “Lavender,” an AI designed to develop target lists, came under the microscope for their accuracy and the “collateral damage” of civilian loss of life. In the tragedy of Minab, we must remember that technology, no matter how efficient, cannot replicate the judgment of the human heart. Whether the event was an act of war or simply the hallucination of technology gone wrong, the

  • The Kitchen Sink for March 6, 2026: Legal Tech Trends

    The Kitchen Sink for March 6, 2026: Legal Tech Trends

    This week’s kitchen sink for March 6, 2026 (with meme from Gates Dogfish) discusses layoffs due to AI, the ‘practical ability’ standard, Anthropic’s sudden popularity & more! Why ‘the kitchen sink’? Find out here! 🙂 The Kitchen Sink is even better when you can include a brand-new eDiscovery meme courtesy of Gates Dogfish, the meme channel dedicated to eDiscovery people and created by Aaron Patton. For more great eDiscovery memes, follow Gates Dogfish on LinkedIn here! I don’t even know what to say to that! 🤣

    Here is the kitchen sink for March 6 of ten-ish stories that I didn’t get to this week, with a comment from me about each:

    We’re up to 1,002 AI hallucination cases and counting. But as I discussed in this post, hallucinations by US lawyers aren’t as bad as you think.

    A ‘Practical Control’ Decision Rejects the ‘Legal Right’ Standard: Michael Berman covers a case on the EDRM blog that involves possession, custody and control and ‘applied the ‘practical ability’ standard that I favor’ over the ‘legal right’ standard. The issue was whether Wellpath’s employees had possession, custody, or control over Wellpath’s Report (Wellpath went bankrupt and was no longer a party). Find out what happened here!

    Block lays off 40% of workforce as it goes all-in on AI tools: Block, the fintech group headed by Twitter cofounder Jack Dorsey, will cut its workforce by ‘nearly half’ in one of the clearest signs of the sweeping changes AI tools are having on employment. The bad sign for employees everywhere? Shares in the payment company soared more than 25 percent in after-hours trading. Ruh-roh.

    The Pricing Pulse: Generative AI-Assisted Review Insights from the Winter 2026 eDiscovery Pricing Survey: After publishing insights on Forensic Collection, Examination, and Testimony, Data Processing, Hosting, and Project Management, and Document Review, Rob Robinson is rolling out the results of the Generative AI-Assisted Review portion of the survey. 
    I’m sure nobody cares about that one. 😉

    Pentagon Got Help From Claude in Iran: ‘Ya fired, Anthropic! But, before you go, can you help us with starting this war?’ 😉

    Anthropic’s Claude is suddenly the most popular iPhone app following Pentagon feud: Say, you don’t think the two things are really related, do you? 😉

    Alabama man pleads guilty to hacking, extorting hundreds of women: A cautionary tale on how clever and devious some cybercriminals are. A 22-year-old Alabama man pleaded guilty to extortion, cyberstalking, and computer fraud charges after hijacking the social media accounts of hundreds of young women (including minors) over a three-year period. Which means he started when he was either 18 or 19. He impersonated the targets’ friends and used other tactics to trick his victims into handing over account recovery codes and passwords, then used the stolen credentials to take control of their Snapchat, Instagram, and other social media accounts. Another guy also pleaded guilty to hacking nearly 600 women’s Snapchat accounts using social engineering to steal private nude photos that he later sold or traded online. Yeesh!

    ChatGPT Health Tool Isn’t So Great in a Crisis: A tool billed as a way to plug your medical records into ChatGPT and receive health advice is drawing sharp warnings from researchers: it underestimated the urgency of care in just over half of cases where doctors said hospital treatment was needed immediately. It’s just a flesh wound! 🤣

    Shift Happens: 5 Ways to Handle Change as a Lawyer: See what they did there? 😉 Seriously, though, this is a nice, short and to the point article discussing five ways

  • The Hindu: How AI Powers Data Journalism & Investigations at Scale

    The Hindu: How AI Powers Data Journalism & Investigations at Scale

    In recent months, The Hindu, a leading Indian newspaper, has significantly expanded its data journalism capabilities through the integration of large language models (LLMs). This isn’t about automating writing, but about accelerating investigations, processing vast datasets and building interactive tools with greater efficiency, according to Srinivasan Ramani, Deputy National Editor and Senior Associate Editor at The Hindu.
    One major undertaking involved analyzing data from India’s Special Intensive Revision (SIR) of voter rolls. Authorities released records detailing voter deletions and the stated reasons. The team processed approximately 22 million records across three states – Bihar, Tamil Nadu, and West Bengal – which were initially provided as image-based PDFs in Hindi.
    The workflow involved using optical character recognition (OCR) to convert the images into machine-readable text, translating the text into English, and storing the data in databases. Ramani’s team then utilized LLMs to generate SQL queries using natural language prompts, eliminating the need for manual database coding. As reported by WAN-IFRA, this process revealed patterns, such as a disproportionate number of women being deleted from voter rolls in Bihar despite higher male out-migration, and inconsistencies in the reasons cited for deletions.
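    The report doesn't publish the team's actual schema or prompts, so the sketch below invents a tiny deletions table, but it shows the pattern: a natural-language request goes to the model, and the SQL it returns is run against a local database rather than being hand-coded.

```python
import sqlite3

# Hypothetical schema and rows; the real SIR tables are not public here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deletions (state TEXT, gender TEXT, reason TEXT)")
conn.executemany(
    "INSERT INTO deletions VALUES (?, ?, ?)",
    [
        ("Bihar", "F", "shifted"),
        ("Bihar", "F", "dead"),
        ("Bihar", "M", "shifted"),
        ("Tamil Nadu", "F", "duplicate"),
        ("Tamil Nadu", "M", "dead"),
    ],
)

# The kind of query an LLM might emit for the prompt
# "count deleted voters by gender within each state":
query = """
    SELECT state, gender, COUNT(*) AS n
    FROM deletions
    GROUP BY state, gender
    ORDER BY state, gender
"""
rows = list(conn.execute(query))
for row in rows:
    print(row)
```

    Because the output is a table rather than prose, a wrong query tends to fail loudly or yield numbers that can be cross-checked, which is part of why this workflow is considered lower-risk than free-form generation.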
    These findings were discussed in Parliament and prompted some corrections to voter rolls in Bihar following public scrutiny and ground reporting.
    The Hindu also employed LLMs to build interactive maps for the 2019 and 2024 general elections, allowing users to filter results by region, state, and other criteria. Remarkably, Ramani stated that he did not write a single line of code for these applications. According to Archyde, the entire application was built over two weeks using prompts in ChatGPT, Gemini, and Claude.
    The team broke down the interface into components and used the models to generate annotated code for each, enabling verification. This significantly reduced the time required compared to previous methods that relied on in-house engineers or volunteers.
    Beyond digital analysis, The Hindu used AI-assisted guidance to assemble low-cost Arduino-based devices to measure heat stress experienced by workers in Chennai. These devices recorded temperature and humidity every 10 seconds, providing data for a cook, a fisherman, an industrial worker, and an autorickshaw driver. The results revealed significant variations in heat index exposure, peaking at 69°C (156.2 F) in one instance.
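    The article doesn't say how the team converted the loggers' temperature and humidity readings into a heat index; the sketch below uses the standard US National Weather Service (Rothfusz) regression, the usual choice for that conversion, and gives a feel for how temperature and humidity combine.

```python
def heat_index_f(temp_f, rh_pct):
    """NWS Rothfusz regression: approximate heat index in degrees F.

    Valid roughly for temp_f >= 80 F and rh_pct >= 40%."""
    t, r = temp_f, rh_pct
    return (-42.379 + 2.04901523 * t + 10.14333127 * r
            - 0.22475541 * t * r - 6.83783e-3 * t * t
            - 5.481717e-2 * r * r + 1.22874e-3 * t * t * r
            + 8.5282e-4 * t * r * r - 1.99e-6 * t * t * r * r)

def heat_index_c(temp_c, rh_pct):
    """Same regression, with Celsius in and out."""
    return (heat_index_f(temp_c * 9 / 5 + 32, rh_pct) - 32) * 5 / 9

# e.g. a logger reading of 38 C at 60% relative humidity:
print(round(heat_index_c(38.0, 60.0), 1))
```

    The steep cross-terms in the regression are why the reported peaks can sit so far above the raw air temperature once humidity is high.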
    Following publication of these findings, the Tamil Nadu government announced a heat management plan and explored using similar devices for further study.
    Ramani emphasizes that AI tools are integrated into an established data journalism pipeline, assisting with tasks like web scraping, document processing, query generation, and front-end development. He stresses, though, that human oversight remains crucial. He describes AI as ‘a very sophisticated intern,’ capable of executing tasks precisely but requiring human direction and control.
    He cautions against relying on AI for editorial conclusions; the risk of hallucination, he argues, is lower in structured tasks where outputs can be directly tested.
    The Hindu’s data journalism efforts have evolved over the past decade, from visual add-ons to traditional reporting to a dedicated function with data journalists, designers, and editorial coders. A notable past project included an excess deaths analysis during the COVID-19 pandemic, which estimated that official death counts were significantly underreported.
    Ramani notes that data-driven reporting is now integrated across all operations, leading to increased subscriptions and engagement. He believes that AI expands the scale at which journalistic judgment can operate, ultimately contributing to a more informed audience.

  • Police chief ‘absolutely determined’ to learn lessons from Maccabi away fans ban

    Police chief ‘absolutely determined’ to learn lessons from Maccabi away fans ban

    Acting Chief Constable Scott Green issued a full and sincere apology in January for failings identified by His Majesty’s Chief Inspector of Constabulary, including an ‘AI hallucination’ contained in a police report submitted to Birmingham’s Safety Advisory Group (SAG). Maccabi supporters were barred from travelling to a Europa League game at Villa Park in November, following an SAG decision which cited safety concerns based on a report prepared by the West Midlands force. The Chief Inspector of Constabulary’s review, published two days before the retirement of former chief constable Craig Guildford, found eight ‘inaccuracies’ in the police report and said it had overstated Maccabi supporters’ role in disorder in Amsterdam in 2024. The inaccuracies included a reference to a non-existent game between Maccabi and West Ham, said to have been produced by Microsoft Copilot. During media interviews on Wednesday, his first since taking over from Mr Guildford, Mr Green said: ‘On my first day in office I took three important steps. ‘The first one was to apologise for the damage that West Midlands Police had caused to the confidence that the public have in us and particularly to the groups affected. ‘The second thing that I did was I did an immediate voluntary referral to the Independent Office of Police Conduct about the conduct of the officers and staff involved, in particular the senior officers in that decision-making. 
    ‘And then the third thing that I did was launched an operation called Operation Strive, which is our recovery plan to make sure that we learn all of the necessary lessons from the planning and policing of that fixture.’ Asked what practical steps he had taken to learn lessons from the handling of the fixture, the Acting Chief Constable said: ‘One of the first things that I did within my first week is met with a number of members of the community, particularly the Jewish community. ‘I spent last week at a community iftar event listening to our Muslim community. ‘I’m doing the same thing at the end of this week to draw the lessons and experiences from the communities.’ He added of the two ongoing inquiries into issues surrounding the decision to ban Maccabi supporters: ‘One of the challenges with this is, of course, there are still two reports to be published, so I do need to be careful what I say. ‘I don’t want to prejudice those reports and I don’t want to be unfair to the officers and staff that might be subject of investigation. ‘What I’m clear on, though, is that the force should have engaged with communities more in the run-up to the decision-making, and we should have been more precise with our use of intelligence. ‘I am absolutely determined that the force will learn its lessons from it.’ He said of the AI-powered Copilot assistant: ‘I took the decision within my first day to turn it off. It was a limited pilot. ‘We’re still scoping some of the lessons that are being learned, and until in particular the Independent Office of Police Conduct have given their view on it, we’re going to leave it switched off for the time being. ‘What I’m clear on, though, is that artificial intelligence, AI, forms part of all of our lives. It will form a part of all of our working lives. 
    ‘We do actually already use AI in

  • Some People See Aliens While on DMT. Researchers Want to Find Out What They Can Teach Us

    Some People See Aliens While on DMT. Researchers Want to Find Out What They Can Teach Us

    A web of EEG electrodes covered Anton Bilton’s scalp like a jeweled headdress. The machine would map his brain activity while the potent psychedelic dimethyltryptamine, commonly known as DMT, coursed through an IV drip and into his bloodstream. With some trepidation, he waited to be plunged into an otherworldly realm that was familiar, given his many years of psychedelic experience, and yet, as was inevitably the case with every DMT trip, completely new. ‘I didn’t know when they were going to turn it on,’ he says. ‘It was eight minutes of having your head in a guillotine, waiting for it to fucking drop.’ Then, like a rocket ripping out of Earth’s atmosphere, he arrived. And he knew he was being watched—not only by the humans back in the hospital room but also by a panoply of alien beings within the DMT realm itself. The peak of Bilton’s trip lasted about half an hour—considerably longer than a typical DMT experience. (Vaping, the most common mode of ingestion, produces peak effects lasting 10 to 15 minutes.) It was 2022, and he was one of 11 volunteers in the world’s first clinical study with ‘extended DMT,’ nicknamed DMTx, at Imperial College London. The idea had been suggested six years earlier in a paper by neurobiologist Andrew Gallimore and psychiatrist Rick Strassman, which argued that a technology called target-controlled intravenous infusion, originally developed to maintain steady levels of anesthesia during surgery, could be repurposed to prolong the DMT state. For Gallimore, one of the goals behind DMTx is to study an especially strange aspect of the DMT experience: perceived encounters with nonhuman, seemingly superintelligent entities. On March 18, he and a team of experts will launch a new psychedelic retreat center-slash-research facility on the tiny Caribbean island of Bequia aimed in part at establishing sustained, two-way communication with these beings. 
    A ‘SETI for the mind,’ Gallimore calls it, referring to the Search for Extraterrestrial Intelligence. Called Eleusis, the facility is named after an ancient Greek city that once attracted spiritual pilgrims for the ritual consumption of what some experts believe was a psychedelic potion.

    DMT is currently a Schedule I drug in the US, the federal government’s most tightly controlled category, but it can be administered legally in Bequia by licensed care providers. Eleusis’ research wing will be overseen by Noonautics, a nonprofit headed by Gallimore that ‘explores the edges of human understanding,’ according to its website, while the therapeutic side will be managed by Charles Patti and Christina Thomas, a couple who also co-own a ketamine clinic in Florida. (While the therapeutic potential of DMT hasn’t been as rigorously studied as that of some other psychedelics, it has shown promise for the treatment of alcohol use disorder and major depressive disorder.)

    DMTx sessions will be available to Eleusis guests (the resort is expecting to host 30 this month) under the supervision of medical experts, and alongside a plethora of new-agey offerings like breathwork and sound healing. All applicants will be prescreened to exclude anyone with ‘clear contraindications such as certain cardiovascular conditions, unmanaged psychiatric disorders, or medication conflicts,’ says Thomas.

    The Eleusis experience—starting with a four-day package costing $9,500 and including two DMTx sessions, lodging, and food—is promoted as a more personalizable and manageable alternative to ayahuasca, which, in addition to lasting several hours, can be a physical ordeal and, like any psychedelic, can sometimes end in a terrifying trip. In the Amazon, where some experts believe ayahuasca has been used by indigenous peoples for millennia, the physical and psychological discomforts caused by the potion are viewed as important components of

  • A Modest Proposal Concerning AI Hallucinations

    A Modest Proposal Concerning AI Hallucinations

    We added a new site to our blogroll recently – ‘AI Hallucination Cases,’ which describes itself as: This database tracks legal decisions in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. . . . While seeking to be exhaustive (972 cases identified so far), it is a work in progress and will expand as new examples emerge. . . .

    Of the (as of this writing) 972 judicial decisions the site identifies, over two-thirds (676) are from the United States. Not surprisingly, more than half of the cases involve pro se litigants, but in almost 400 (391) of the listed decisions the perpetrators of AI hallucinations were actual lawyers, who, by definition, are supposed to know better. Unfortunately, the problem does not seem to be getting any better. The number of judicial opinions involving actual lawyers (non-pro se) accused of passing off AI hallucinations in filed documents for the incomplete month of February 2026 (it’s 2/23/26 as we write this post) is thirty-three (31 of which were in the USA), which is significantly more than a decision a day. For January 2026, the number is thirty-six (25 of which were in the USA). For December 2025, the number was fifty-one (36 of which were in the USA). We didn’t do any detailed counts before then, but anyone who wants to can go to the site and verify our numbers.

    A lawyer’s use of AI-hallucinated anything in a filed document is unethical. It’s a fraud on the court and opposing parties. It’s a lack of candor with the court. It’s unprofessional, since at a minimum the presence of such hallucinations demonstrates incompetent use of technology. See ABA Model Rules of Prof. Conduct, Rule 1.1. In federal court, it’s an open-and-shut violation of Fed. R. Civ. P. 11 for not reading what was cited in signed, filed papers.
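    As a quick sanity check on the monthly counts above, the per-day rates are easy to work out (a minimal sketch; the decision counts come from the post, and February 2026 is counted through the 23rd, the date of writing):

    ```python
    # Judicial opinions involving actual lawyers (non-pro se), per the counts above.
    # Each entry maps a month to (decisions, days counted).
    counts = {
        "Dec 2025": (51, 31),
        "Jan 2026": (36, 31),
        "Feb 2026": (33, 23),  # partial month, through 2/23/26
    }

    for month, (decisions, days) in counts.items():
        rate = decisions / days
        print(f"{month}: {decisions} decisions in {days} days = {rate:.2f} per day")

    # Every month runs at more than one decision per day.
    assert all(d / days > 1 for d, days in counts.values())
    ```

    February’s partial-month figure works out to roughly 1.4 decisions per day, the highest pace of the three months, which is what makes the ‘not getting any better’ observation above more than rhetoric.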
    The Fifth Circuit, in a recent published opinion, held that ‘submitting a brief riddled with fabricated quotations’ is conduct ‘unbecoming a member of the bar’ that violates F.R.A.P. 46(c). Fletcher v. Experian Information Solutions, Inc., ___ F.4th ___, 2026 WL 456842, at *2, 6 (5th Cir. Feb. 18, 2026) (citing, inter alia, the same website we mentioned above).

    Moreover, it’s stupid as hell. There’s nothing easier for the other side to identify than legal citations and quotations that don’t in fact exist. Can any lawyer reasonably believe that the other side isn’t going to review the cases that s/he cites? There are even AI programs available to detect AI hallucinations, including in legal filings. Once hallucinations are detected, the party responsible is essentially caught red-handed with virtually no defense.

    Then the perpetrator ends up like the attorney in Fletcher, or in this recent prescription medical product liability-adjacent decision:

    Plaintiff has submitted dozens of inaccurate factual and legal citations across at least five different filings. Regardless of any use of generative artificial intelligence, erroneous citations suggest Plaintiff’s counsel submitted these filings without ‘conduct[ing] a reasonable inquiry into the facts and the law’ as required by Rule 11(b). When confronted with similar situations, courts have ordered the filing attorneys to show cause why sanctions or discipline should not issue. The Court finds such an order is appropriate here.

    Ledoux v. Outliers, Inc., 2026 WL 291023, at *3 (W.D. Wash. Feb. 4, 2026) (citations and quotation marks omitted).

    Something needs to be done about this. We had hoped that embarrassment stemming from the broad public disclosure of the initial instances of attorneys found responsible for AI hallucinations would have been enough of a deterrent. But our hope in