This article is about 2,378 words; reading the full article takes about 4 minutes.

Original | Odaily (@OdailyChina)
Author | Golem (@web3_golem)

The first batch of employees laid off because of AI are already returning to work.

On February 27th, Jack Dorsey's fintech company Block laid off over 4,000 employees at once, cutting its total workforce from 10,000 to under 6,000. Jack's stated reason for the layoffs was that "AI tools have changed everything." While it is a societal consensus that AI will eventually eliminate certain professions, the fact that it is replacing mid-to-senior white-collar workers first has intensified workplace anxiety. (Related reading: At Jack Dorsey's Company, 4,000 White-Collar Workers Are Being Replaced by AI)

However, less than a month later, some of the laid-off employees have already received offers to return. According to Business Insider, the recalled employees come from various departments, including engineering and recruiting. A design engineer at Block posted on LinkedIn that leadership told him he had been laid off by mistake, a "clerical error." An HR employee said in a since-deleted post that they were only recalled after their manager persistently advocated for them. Others described receiving a call from Block out of the blue a week after being laid off, asking them to come back. Jack has not publicly responded to the recalls.

Proportionally, the recalled employees are only a tiny fraction of the initial layoffs, but they illustrate a point: for some roles and tasks, AI is not yet as effective as humans.

Start with cost: an enterprise-grade AI "employee" can easily be more expensive than ordinary human labor. Hiring people costs salaries; using AI costs tokens. Claude Opus 4.6's standard base price is $5 per 1 million input tokens and $25 per 1 million output tokens.
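Per-token prices only turn into a monthly bill once you assume a usage volume. A minimal sketch of the arithmetic, where the 300M-input / 80M-output monthly volumes are illustrative assumptions, not figures from the article:

```python
def monthly_cost(input_tokens_m: float, output_tokens_m: float,
                 price_in: float, price_out: float) -> float:
    """Monthly bill given millions of input/output tokens and per-million prices."""
    return input_tokens_m * price_in + output_tokens_m * price_out

# Claude Opus 4.6 prices quoted above: $5 / $25 per 1M tokens.
# Assumed heavy-agent volume: 300M input tokens, 80M output tokens per month.
claude_usd = monthly_cost(300, 80, 5.0, 25.0)
print(f"Assumed monthly bill: ${claude_usd:,.0f}")  # prints: Assumed monthly bill: $3,500
```

At that (hypothetical) volume the bill already lands in white-collar-salary territory, which is the article's point.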
Domestic large language models are cheaper: Qwen3.5 plus's standard base price is 0.8 RMB per 1 million input tokens and 4.8 RMB per 1 million output tokens.

Take the recently popular OpenClaw as an example. A senior "shrimp farmer" within Odaily reported that using OpenClaw merely as a life and research assistant for just over a month burned through about $6,000 worth of tokens (on Claude 4.5/4.6 models). With $6,000 a month, what highly educated professional couldn't you hire (outside of Europe and America)?

If personal use is this costly, integrating AI into enterprise workflows is even more expensive. Take the simplest example: replacing customer service. In regions with degree inflation, you can hire a presentable college graduate as a customer service representative for 3,000 RMB a month. But an AI customer service agent that can truly replace a human, handle complex tickets, access multiple knowledge bases, hold multi-turn conversations, and stay reliably online costs far more than 3,000 RMB per month. In 2024, the Swedish payments company Klarna made a high-profile layoff of over 1,000 people, claiming its AI customer service could handle the workload of 700 human agents. But in May 2025, Bloomberg and other outlets reported that Klarna had started rehiring human customer service staff, with its CEO admitting they had indeed "moved too fast" with AI.

Furthermore, AI replacing human labor also runs into the Jevons Paradox, an economic observation that efficiency improvements don't necessarily reduce the use of a resource: because the cost of use falls and demand expands, total consumption may rise. Applied to the AI-era workplace, when AI makes employees more efficient, companies won't let them rest; they will instead demand more tasks completed in the same timeframe. So-called efficiency gains become
-

Thread by @nityeshaga on Thread Reader App
Nityesh

Claude Code Is All You Need

It's 3:30 AM and I can't stop. I've spent every night this week connecting my spare MacBook Air to my work MacBook Pro using Tailscale, wiring it up to Slack with a little Python script, so that whenever I send a message, it starts a Claude Code session using claude -p. The result is an always-on AI that lives on a real computer, has access to real tools, and remembers every conversation we've had. And it costs me $200 a month. That's it. Claude Max subscription.

Everyone's talking about OpenClaw

OpenClaw went viral this year. 100k+ GitHub stars. But what I realized with this exercise is that Claude Code already does everything OpenClaw built. File access. Shell commands. Tool use. Plugins. The difference is that Claude Code runs Claude, with a Claude Max subscription. And the Claude Code harness itself is :chefs-kiss:

What actually makes it feel human

It's not the chat interface. If a chat window is just me messaging a bot, it still feels like a bot. What changed everything was giving it the ability to initiate conversations. I set up cron jobs with open-ended prompts, and because Claude Code builds memories across sessions, it started DMing me things that were actually meaningful, based on what we'd talked about before. That's when it stopped feeling like a tool and started feeling like something else entirely. Giving your AI a machine to run on, with persistent memory and recurring access, is a fundamentally different experience than anything people have had with chat.

The moment it clicked

I asked it if it could show me something by spinning up a quick web server. Since we're both connected to the same Tailscale network, it gave me a URL. I clicked it, and I was browsing all the files on my other MacBook from my browser. That was mind-blowing.

The setup

Two MacBooks on a Tailscale network. Slack as the interface. Claude Code under the hood. The whole thing is open source; I'll link the repo below so you can see the architecture and set it up yourself. I'm also putting together a screen recording to walk through the setup, which I'll attach to this post.

There was never a hard part. There was never a moment I almost gave up. This is just one of those things I cannot stop doing. Pure obsession. I was moved to build this, and I wanted to write about it. That's all this is.
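The core of the setup above can be sketched in a few lines, assuming the `claude` CLI is on PATH and supports `-p` (print mode) and `--continue` (resume the previous conversation); the Slack plumbing itself (the socket-mode listener and posting the reply back) is elided here:

```python
import subprocess

def build_command(message: str, resume: bool = False) -> list[str]:
    """Build the claude -p invocation for one incoming Slack message."""
    cmd = ["claude", "-p", message]
    if resume:
        cmd.append("--continue")  # keep memory of the ongoing conversation
    return cmd

def handle_slack_message(message: str, resume: bool = True) -> str:
    """Run one Claude Code session for the message; return its reply text."""
    result = subprocess.run(
        build_command(message, resume),
        capture_output=True, text=True, timeout=600,
    )
    return result.stdout.strip()
```

In the real setup, `handle_slack_message` would be called from the Slack event handler and its return value posted back to the channel.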
Hope you feel the AGI running this. I'll share some screenshots below of my "feel the AGI" moments from talking to Luo Ji.
Here’s the guide that I followed (and you can too). It walks you through how to prepare your spare Mac for this, how to set up the Slack bot, how to set up Tailscale, and finally how to set up your AI employee itself.
nityeshaga.github.io/claude-home-ba…

Moments that made me feel the AGI the hardest:
– reading its diary
– seeing it initiate a conversation with me
– seeing it ignore a msg in a public channel (even though it read it!)
– seeing it build a random web app with a web server and me being able to access it in my own browser instantly! no deployments required!!
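The cron-driven proactivity described above can be as simple as a couple of crontab entries; the prompts, schedule, and log path here are my own illustrative guesses, not the author's actual config:

```shell
# Morning: an open-ended prompt lets the agent decide whether to DM me at all.
0 9 * * *  claude -p "Read your diary from yesterday. If anything is worth following up on, message me on Slack." >> ~/agent.log 2>&1
# Evening: have it write its own diary so memories accumulate across sessions.
0 21 * * * claude -p "Write today's diary entry based on our conversations." >> ~/agent.log 2>&1
```

The open-endedness is the point: because the prompt doesn't demand output, the agent can choose to stay silent, which is what makes the messages it does send feel meaningful.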
-

US votes against adopting UN Women’s report over abortion, gender concerns
The United States was the lone nation to vote against the Agreed Conclusions by the UN Commission on the Status of Women (CSW) that were officially adopted Thursday.
The US took issue with gender ideology, abortion rights, and artificial intelligence regulation language in the document, even offering draft amendments that were swiftly rejected.
The CSW’s conference theme for its 70th session was ‘[e]nsuring and strengthening access to justice for all women and girls, including by promoting inclusive and equitable legal systems, eliminating discriminatory laws, policies and practices, and addressing structural barriers.’ Representatives of member states, UN entities, and accredited non-governmental organizations from around the world attended the two-week conference that concluded Thursday.
The Agreed Conclusions offer a roadmap for more inclusive governance to support peace and social cohesion and prevent future violations by focusing on reviewing and amending discriminatory laws, including any related to child marriage, family law, and property rights, as well as implementing stronger measures to protect women and girls against violence both online and offline.
Many delegates pleaded for more advocacy for climate refugees and the effects of climate change on women and children, particularly those in war-torn countries.
The contested vote broke a near seven-decade streak in which the committee’s document, refined and negotiated ahead of and during its annual two-week conference, had always been adopted by consensus among the 45 elected country members.
US President Donald Trump’s administration has vocalized opposition to gender-affirming or inclusive language, calling it ‘gender ideology extremism,’ and has pushed forward an anti-abortion agenda from his first days in his second term in office.
Many nations, including the Ivory Coast, the Democratic Republic of Congo, Egypt, Mali, Mauritania, and Saudi Arabia, voiced support for the US’ opposition to certain provisions as part of the adopted report, even abstaining from supporting the adoption of the document in an initial vote at the beginning of the conference on March 9.
But all six countries that abstained ultimately voted in favor of approving the document to show support for the ultimate cause of justice for all women and girls around the world.
In her closing remarks UN Under-Secretary-General and UN Women Executive Director at CSW Sima Bahous said:
Without women’s equal, meaningful participation, without their equal access to justice, to economic opportunity, to a life free from violence, without their leadership in governments, the private sector, in peace negotiations—our nations will not progress. Reaffirming this very simple truth, pushing it forward through agreed conclusions, is the purpose of this Commission—and you rose to the challenge.
Conference organizers noted the agreement among the nations comes following a recent report by UN Women that found ‘no country has yet achieved full legal equality between women and men.’ -

Peru promotes stronger management of mining environmental liabilities and the efficient execution of mine closure processes
Press release Minem
The Ministry of Energy and Mines (Minem), through the deputy minister of Mines, Juan Samanez Bilbao, inaugurated the ‘International Conference on the Mining Environment in Latin America 2026’, an event that seeks to strengthen the management of mining environmental liabilities in our country with the participation of public and private institutions and international entities.
‘Today we are brought together by a common purpose: to strengthen our technical, institutional, and human capacities to consolidate one of the greatest sustainable development challenges in our country: the proper management of mining environmental liabilities and the efficient execution of mine closure processes,’ said the deputy minister.
He added that the international conference represents a strategic opportunity to increase inter-institutional knowledge and experience, incorporate best practices and innovative technologies, strengthen governance and international coordination, and promote higher standards of environmental compliance.
The event was organized by the Korea International Cooperation Agency (KOICA), the Korea Mine Rehabilitation and Mineral Resources Corporation (KOMIR) and MINEM, and brought together experts and delegations from the Republic of Korea, Argentina, Bolivia, Brazil, Colombia, Chile, Ecuador and Peru.
MINEM, together with the Peruvian Agency for International Cooperation (APCI) and KOICA, has signed Records of Discussions and agreements to strengthen the mining sector. Chief among them are projects for the remediation of mining environmental liabilities, with technology transfer and technical assistance, a key agreement for improving mine closure processes.
The deputy minister expressed his appreciation to KOICA for its commitment to sustainable development, its ongoing support, and the effort it makes in our country. "This ongoing joint effort produces concrete results, helping to strengthen trust among the State, industry, and society," he added.
For his part, the ambassador of South Korea to Peru, Choi Jong-wook, said that ‘this conference is a favorable occasion for Korea, Peru, and other friendly countries of Latin America to discuss the common challenges of sustainable mining and environmental management of mines, sharing their policies, technologies, and diverse experiences’.
Attending this event were Kim Young-woo, director of KOICA in Peru; Kwon Soon-jin, director of the Mineral Resources Division of KOMIR, line directors of MINEM, and professionals from the public and private sectors of the mining industry. -

The End of the Human Pull Request: How Stripe’s ‘Minions’ Are Writing 1,300+ PRs a Week
Siddhesh Surve · 4 min read

Forget AI assistants that just autocomplete code. The era of the fully autonomous, unattended AI software engineer is officially here, and it is rewriting the rules of infrastructure.

If you want to see the future of software engineering, stop looking at the chatbot in your IDE. Look at the pull request queue. Right now, over 1,300 pull requests merged every single week at Stripe contain absolutely zero human-written code. They are entirely generated, tested, and pushed by an internal fleet of unattended AI agents known as 'Minions.'

When managing massive, distributed codebases, the ultimate constraint on development velocity is no longer human typing speed; it is how effectively an engineering organization can orchestrate an autonomous machine workforce. Stripe's latest engineering breakdown reveals exactly how they achieved this massive scale. It isn't just about using a smarter Large Language Model. It is a masterclass in cloud architecture, deterministic orchestration, and shifting the feedback loop. Here is how Stripe built the ultimate AI employee, and why every major enterprise is about to copy their blueprint.

1. The 'Devbox' Secret: Treating Agents Like Humans

The biggest mistake most companies make when deploying AI coding agents is trying to run them in bespoke, heavily restricted sandbox environments. Stripe took the exact opposite approach. Their Minions run on the exact same standard developer environment that human engineers use: the 'devbox.' These are AWS EC2 instances that are pre-warmed and ready to go in under 10 seconds. In DevOps terminology, these devboxes are 'cattle, not pets': standardized, isolated, and easily replaceable. For an autonomous agent, this architecture is a goldmine. It provides a highly parallelizable, predictable, and isolated cloud computing environment. Because the agent is operating in a quarantined, cloud-based replica of the codebase with no access to real user data or production databases, it doesn't need to pause and ask a human for permission. It operates with full autonomy, and if it completely breaks the environment, the blast radius is confined to a single disposable EC2 instance.

2. 'Blueprints': Taming the Agentic Chaos

If you just let an LLM loose in a massive enterprise repository, it will burn through millions of tokens hallucinating architecture and breaking linters. To solve this, Stripe built a new orchestration primitive they call 'Blueprints.' A blueprint is effectively a state machine that weaves together deterministic code execution with open-ended AI agent loops. For example, a Minion might be given an open-ended 'agent' node to actually write the feature logic. The LLM has wide latitude here to think, explore, and write. But once the code is written, the system hands the output over to a 'deterministic' node to run the enterprise linters and formatters. You do not need to burn expensive AI compute to format code or run a basic security check. By putting the AI in a tightly constrained box and wrapping it in deterministic, hard-coded workflows, Stripe radically reduced token waste and maximized system reliability.

3. The 'Toolshed': Scaling the Model Context Protocol (MCP)

An agent is only as good as the context it can access. To build a truly autonomous worker, that worker needs to be able to read internal documentation, check Jira tickets, and query build statuses. To achieve this, Stripe heavily leveraged the Model Context Protocol (MCP), building a centralized internal server dubbed the 'Toolshed.' This centralized hub contains nearly 500 custom MCP tools. Whether
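Stripe hasn't published its Blueprint code, but the idea it describes, a state machine alternating open-ended agent nodes with cheap deterministic nodes, can be sketched roughly like this (all names and APIs below are illustrative, not Stripe's actual implementation):

```python
from typing import Callable

class Blueprint:
    """A linear state machine over string state: agent and deterministic nodes."""

    def __init__(self) -> None:
        self.nodes: list[tuple[str, Callable[[str], str]]] = []

    def agent(self, fn: Callable[[str], str]) -> "Blueprint":
        """Open-ended LLM step: wide latitude, expensive tokens."""
        self.nodes.append(("agent", fn))
        return self

    def deterministic(self, fn: Callable[[str], str]) -> "Blueprint":
        """Hard-coded step (linter, formatter): cheap and predictable."""
        self.nodes.append(("deterministic", fn))
        return self

    def run(self, task: str) -> str:
        state = task
        for _kind, fn in self.nodes:
            state = fn(state)  # each node transforms the working state
        return state

# Stand-ins: a stubbed "LLM" and a trivial "formatter".
def write_code(task: str) -> str:
    return f"def solve():  # {task}\n    pass"

def format_code(code: str) -> str:
    return code.rstrip() + "\n"

pr = Blueprint().agent(write_code).deterministic(format_code).run("add input validation")
```

The key property is that the formatter never touches the LLM: only the `agent` node spends tokens, while validation and cleanup stay deterministic.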
-

Vendasta’s Latest AI Employee Automates and Optimizes CRM
Vendasta has launched its latest agentic function, CRM AI. Joining its AI employees that handle inbound leads, marketing, and reputation management, the new sales assistant is purpose-built to offload the onerous tasks of recording and gaining insights from customer and prospect interactions. The goal with this agent is not just to free up valuable time for sales professionals, a valuable outcome in its own right, but also to make CRM systems more effective. CRM processes today are often mired in human error, such as the bias inherent in manual note entry and follow-up actions left to linger. Beyond its automated aspects, the CRM agent also has a more natural interface, so it can serve as an intelligence tool for sales pros. For example, it has a voice interface and conversational intelligence, so users can ask things like "What are the prospect follow-ups that I missed last week?"

"CRM AI closes that loop automatically," said Vendasta CMO Sanjay Manchanda, "so the intelligence from every customer interaction actually drives the next action. For local businesses and the agencies that support them, that's the difference between a pipeline that looks good and one that actually converts."

Bringing Order to Chaos

Going deeper under the hood, Vendasta's CRM agent starts at $19 per month for the basic version and includes the following key features.

AI Sales Assistant: Automatically updates CRM contact and company records after every meeting, surfaces pipeline opportunities through a natural-language chat interface, and creates follow-up activities based on conversation outcomes, without any manual input from reps.

Conversation Intelligence: Records and transcribes every sales meeting via built-in Google Meet and Microsoft Teams integrations. AI-generated summaries and action items are stored directly in the CRM, giving managers complete deal visibility without attending every call.

Sales Coaching: AI scores every customer interaction against leading methodologies (BANT, MEDDPICC, Sandler), surfacing specific improvement opportunities for each rep so new hires ramp faster and every rep performs more like the team's top closer.

CRM Custom Objects: Enables businesses to model their exact industry workflow, tracking properties, jobs, vehicles, or equipment, within the CRM without enterprise-level implementation complexity.

During a webinar that we joined following the product's launch, Vendasta also specified that the CRM agent can be added by any local business and trains itself on all of their past CRM data. This makes it a powerful tool and an easy integration in that it doesn't require starting from scratch. "It brings order to the chaos," said Vendasta's Rylan Morris.

Where Leads Go to Die

But fully understanding the value of Vendasta's latest agent requires examining what's broken in current systems and processes. CRM is one of those classic functions that is critical on paper but rarely executed to achieve what organizations intend. It's where business leads go to die. This is often because CRM systems are seen as a chore by sales professionals.
They have to stop after every interaction to document
-

New Business Closed-Loop Discovered in OpenClaw
At the NVIDIA GTC 2026 conference held this week, Jensen Huang officially introduced NemoClaw, the enterprise-level "Lobster," in his speech. From the rapid follow-up of major Chinese companies to the entry of the world's most powerful computing giants, this project "crafted" over a weekend by an independent Austrian developer has triggered a "national lobster-raising" wave from Silicon Valley to Zhongguancun.

There's no need to elaborate on the popularity of OpenClaw. The more important question is: what's next? When the growth rate of an open-source project exceeds that of all infrastructure projects (such as Linux, Android, or large models themselves), it usually means there is a new track that has not yet been fully priced by investment institutions.

On March 15, 2026, Chen Shi, an investment partner at Fengrui Capital, joined the live broadcast of Tencent Technology's "Viewing Trends from the Lobster's Popularity: The Industrial Logic and New Opportunities Behind the Agent Era" and shared his views on the reasons for OpenClaw's popularity and its subsequent evolution. In his view, OpenClaw intuitively represents AI truly breaking out of the chat box. It is not just the emergence of a tool; it marks the arrival of the Agentic AI industry's tipping point, and that process is irreversible.

We've organized this talk to try to answer the following questions:

Why does major companies' collective entry into the "lobster" business essentially amount to seizing the traffic entrance of the Internet?
Why did OpenClaw explode at this particular time?
Why didn't American users set off a "national lobster-raising" wave?
Why is OpenClaw the only product that occupies the open-domain + endless-end quadrant?
Why is the next entrepreneurial opportunity concentrated in vertical scenarios?
What business-logic insights does OpenClaw bring?

From "Talking" to "Doing": Why Is the Explosion Point of Intelligent Agents Now?

OpenClaw is no longer just a technological hotspot; it has broken through to the consumer side. Looking back, the popularity of the "lobster" was driven by several factors together.

The All-Powerful Spring Festival

Remember DeepSeek around this time last year? China's Spring Festival is "all-powerful": last year's DeepSeek and this year's lobster both became popular during the Spring Festival holiday. It's precisely the long holiday that gives people time to calmly experience the technology. More importantly, it's a cognitive leap. For a long time, AI was regarded as a tool confined to a dialogue window. When the lobster can operate the computer autonomously across windows, the impact of this "cognitive gap" is huge: users seem to jump directly from a chat box into a magical world of autonomous operation.

Before the lobster, there were already several well-received Agent-type products in the industry. For example, Anthropic's Claude Code and Claude Cowork are very successful and impressive products. However, perhaps out of considerations such as security and fault tolerance, these products are relatively serious and restrained in their positioning and functional design, limited to a small number of vertical scenarios rather than targeting a broader open-world scenario. I personally like to use both products and often recommend them to friends and colleagues. So I initially underestimated the lobster, thinking it was just Claude Code with remote access.

Popular in China, Cold in the US?

To verify this difference in user perception, I wrote a program with Claude Code to collect 30 media reports each from China and the US. I took the first 30
-

Snowflake vs Alphabet: Which Cloud Analytics Stock Has an Edge Now?
Key Takeaways

Snowflake (SNOW) and Alphabet (GOOGL) are major players in the cloud data and analytics space. While Snowflake provides a pure-play cloud data warehousing and analytics platform, Alphabet offers similar capabilities through Google Cloud's BigQuery as part of its broader cloud ecosystem.

Per the Fortune Business Insights report, the global cloud analytics market was valued at $48.22 billion in 2025 and is expected to grow from $58.42 billion in 2026 to $168.88 billion by 2034, registering a CAGR of 14.2% from 2026 to 2034. Both Snowflake and Alphabet are poised to benefit from this rapid growth.

Snowflake or Alphabet: which of these cloud analytics stocks has the greater upside potential? Let's find out.

The Case for SNOW Stock

SNOW is benefiting from strong adoption and increasing usage of its platform, as reflected by its net revenue retention rate of 125% in the fourth quarter of fiscal 2026. In the same quarter, Snowflake added 740 net new customers, up 40% year over year. The company now has 733 customers spending more than $1 million annually, up 27% year over year, and 56 customers spending more than $10 million annually, up 56% year over year.

SNOW's expanding portfolio has been noteworthy. In 2026, Snowflake launched more than 430 product capabilities, including Snowflake Intelligence, Cortex Code, Snowflake OpenFlow, and Snowflake Postgres. These innovations enhanced the platform's usability and scalability. The company's AI-driven products, particularly Snowflake Intelligence and Cortex Code, have been a major growth driver. In 2026, Snowflake Intelligence, which provides enterprise-grade agent capabilities, was adopted by more than 2,500 accounts within just three months of launch, nearly doubling quarter over quarter.
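The market-size CAGR quoted above can be sanity-checked: growing from $58.42 billion in 2026 to $168.88 billion by 2034 spans eight years.

```python
# CAGR = (end / start) ** (1 / years) - 1, using the figures quoted above.
start, end, years = 58.42, 168.88, 2034 - 2026
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # prints: 14.2%
```

The result matches the reported 14.2% figure for the 2026-2034 forecast period.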
Cortex Code, a transformational coding agent, has been embraced by more than 4,400 customers, enabling faster development and deployment of AI-powered applications.

The Case for GOOGL Stock

Alphabet is growing its presence in the cloud analytics market with Google Cloud's BigQuery, a powerful serverless data warehouse solution. BigQuery is tightly integrated into the broader Google Cloud ecosystem, allowing enterprises to leverage Google's infrastructure, data, and AI services seamlessly.

The company has been growing rapidly in the booming cloud computing market. In the fourth quarter of 2025, Google Cloud revenues surged 47.8% year over year to $17.66 billion and accounted for 15.5% of the quarter's total revenues. Google Cloud ended 2025 at an annual run rate of more than $70 billion, representing a wide breadth of customers, driven by demand for AI products. Google Cloud ended the fourth quarter with $240 billion in backlog, up 55% sequentially. Nearly 75% of Google Cloud customers utilized Alphabet's AI products, showcasing the increasing adoption of its AI-powered solutions.

The increasing number of cloud regions and availability zones globally remains a major positive. Google Cloud has 43 cloud regions, 130 zones, and more than 200 network edge locations across more than 200 countries. Google Cloud is considered the third-largest cloud player among numerous providers worldwide.

Price Performance and Valuation of SNOW and GOOGL

In the trailing 12-month period, SNOW shares have gained 10.2%, underperforming Alphabet shares, which have surged 89%. GOOGL's outperformance can be attributed to its continuing AI push across its search and cloud computing platforms. Despite SNOW's expanding portfolio and rich partner base, the company is suffering from challenging macroeconomic uncertainties and stiff competition from hyperscale cloud providers.

[Figure: SNOW and GOOGL stock performance. Image Source: Zacks Investment Research]

Both SNOW and Alphabet shares are currently overvalued, as suggested by Value Scores of F and D, respectively. In terms of forward 12-month Price/Sales, SNOW shares trade at 9.78X, higher than GOOGL's 8.83X. SNOW
-

TrustCloud unveils AI-native platform to transform GRC
TrustCloud has launched a security assurance platform that connects governance, risk and compliance (GRC) with day-to-day security operations, with an emphasis on automation and continuous monitoring for chief information security officers.

The Boston-based company calls the product an "AI-native Security Assurance Platform" aimed at organisations looking to replace manual GRC processes and workflow-heavy tools. It is positioned as an alternative to established products such as Archer and OneTrust, which are widely used for risk and compliance management in large enterprises.

TrustCloud says enterprise security leaders struggle to produce timely, board-ready reporting when GRC work depends on tickets and manual evidence collection. It also argues that traditional approaches do not keep pace with shifting technology environments, including cloud deployments and AI adoption.

"Enterprise CISOs are frustrated with legacy GRC tools: they inundate security and GRC teams with manual work, make it impossible for CISOs to confidently report status and outcomes with their Boards, and are not designed to monitor and keep up with the ever-changing digital, AI, and IT cyber risk landscape. It's like their teams are being forced to protect a vast ocean with a paper boat," said Sravish Sridhar, CEO and founder of TrustCloud.

The platform uses continuous control monitoring and integrates data across systems. TrustCloud says it can consolidate structured and unstructured signals from cloud, on-premise and business applications into a unified store, which it describes as a "hybrid data fabric" feeding a "GRC data lake".

Product approach

The product centres on what TrustCloud calls "Security Assurance", which it describes as a shift from compliance-driven work. The company argues that assurance requires broader visibility into controls across the IT environment and more frequent assessment than periodic sampling. TrustCloud says the platform uses "Assurance AI" tied to a "Control Graph" that maps continuous control monitoring results to GRC objectives. It says this structure keeps outputs "hallucination-free" and links gaps and remediation actions to business impact.

Reporting is another focus. TrustCloud argues that many security and GRC tools primarily output lists of actions as tickets, while its product produces reporting that links changes to business impact and supports budgeting and prioritisation.

Customer use

TrustCloud says its customers include Global 2000 organisations in highly regulated sectors, but it did not provide customer counts or name specific industries. PDS Health provided a reference customer quote. "CISOs don't need more workflows; we need clarity," said Nemi George, vice president of IT and chief information security officer at PDS Health. George described a data-driven operating model that draws on multiple telemetry sources. "GRC transformation is about moving from manual processes to a data-driven understanding of our control posture and what it means for the business, powered by real-time telemetry and unstructured data feeds from our security, IT, and business applications," George said.

Claims and metrics

TrustCloud made several performance and financial claims about organisations using its approach. It says "most" achieved 12-times ROI by linking compliance directly to revenue growth, cut costs by an average of USD $3 million per year, and reduced residual risk by 60% per year. The company did not provide methodology, sample size, or supporting data for those figures. It also says organisations can reduce internal audit times from 28 days to three and save an average of 63 person-days of manual work per user annually. TrustCloud attributes these outcomes to continuous control monitoring and automated evidence collection, which it says reduce time spent on periodic audits and testing.
The platform is aimed at large, complex environments where GRC deployments have historically taken significant time. TrustCloud says some implementations have run beyond two years and cost millions of dollars, and it is
-

The Spider-Man: Brand New Day Trailer Proves Peter Parker’s Biggest Threat is His Own DNA
The first trailer for Spider-Man: Brand New Day is finally here, promising one of the most grounded and mature Marvel Cinematic Universe films in years. The long-awaited sequel hits theaters on July 31, 2026, picking up four years after the landmark events of Spider-Man: No Way Home and the spell that erased the world’s memory of Peter Parker. After making the ultimate sacrifice to save the world, Spider-Man now faces one of the biggest threats of his superhero career.

Multiple iconic comic book villains are confirmed to appear in the new Spider-Man movie, including Tarantula, Boomerang, Scorpion, Tombstone, and the ninjas of the Hand. However, none of these villains poses as massive a threat to Spider-Man’s life as his own DNA. The new trailer promises a grotesque new storyline for the web-slinging superhero that (appropriately) sets up a brand new day for Spider-Man in the MCU.

Spider-Man: Brand New Day Sets Up Peter Parker’s New Biggest Threat: Himself

The new trailer for Spider-Man: Brand New Day proves that the biggest threat to Peter Parker in this next phase of his life is his own DNA. The trailer reveals that Peter’s powers are beginning to mutate, slowly making him more spider than man. This comes with new powers, including heightened spider-sense, organic webbing, and more. While this all might sound like a good thing, it will only make Peter’s life more difficult in the long run.

As Dr. Bruce Banner reveals, Spider-Man’s mutating powers could lead to catastrophe if left unchecked. There is no telling just how much Peter could mutate. While these mutations might begin with new powers, they could lead to a physical transformation as Peter’s DNA grows closer and closer to that of the spider that bit him. This could lead to horrific body horror in Brand New Day, perhaps even unleashing four extra arms on Peter’s torso, as is briefly the case in the comics. This isn’t an enemy that Spider-Man can defeat with a few well-placed punches and a quippy one-liner.
For all the threats that the Scorpion, Boomerang, and the rest of Brand New Day’s villains may pose, Peter Parker’s own mutating DNA is by far the most terrifying problem. This is something Spider-Man can’t beat up, can’t run from, and certainly can’t avoid so long as it keeps working inside him.

This could also lead to big changes in the MCU as mutations become more common. Spider-Man himself isn’t a mutant, but his transformation could be used as a backdoor for Marvel to finally introduce more mutants in its growing superhero universe. Perhaps Peter’s ongoing transformation could lead him to seek help from experts on the matter, setting up appearances from the likes of Professor Charles Xavier. This, of course, would help lay the groundwork for the MCU’s X-Men reboot, which is expected to hit theaters in 2028.