Anthropic is redefining how people interact with artificial intelligence by giving its Claude assistant the ability to control a user’s computer directly. Instead of sitting at a desk and manually guiding the AI, users can now grant Claude full system access and let it complete tasks independently.
With this new capability, Claude can operate a computer much like a human would. It can move the cursor, type using the keyboard, view the screen, open applications, browse the web, manage spreadsheets, and handle everyday digital tasks. The goal is simple: allow work to continue even when the user is away.
The company announced the feature on X, describing it as a step toward turning Claude into a truly independent digital worker rather than just a responsive chatbot. By combining system access with task automation, Anthropic is positioning Claude as a personal AI agent that can execute real work on a user’s behalf.
Anthropic researcher Alex Albert highlighted the shift in how people may soon work with AI. He wrote on X, ‘The future where I never have to open up my laptop to get work done is becoming real very fast.’
The feature builds on Anthropic’s recently introduced ‘Dispatch’ capability, which already allows users to control Claude remotely via mobile devices. Together, these tools enable a workflow where instructions can be sent from anywhere while Claude carries out tasks on a connected computer.
Explaining how the system functions, Anthropic’s Felix Rieseberg said Claude first connects to approved applications such as Slack or Calendar. If additional tools are needed, the assistant requests user permission before proceeding. Once access is granted, Claude can navigate software environments and complete assignments without further supervision.
This setup effectively turns Claude into a remote digital assistant. Users can issue prompts, step away from their devices, and return later to find their work completed. The experience resembles having an AI employee handling routine computer-based responsibilities in the background.
Felix Rieseberg emphasized the extent of system integration, stating that Claude can now access a laptop’s mouse, keyboard, and screen.
Today, we’re releasing a feature that allows Claude to control your computer: Mouse, keyboard, and screen, giving it the ability to use any app.
I believe this is especially useful if used with Dispatch, which allows you to remotely control Claude on your computer while you’re… — Felix Rieseberg (@felixrieseberg), March 23, 2026
At present, the capability is being offered as a limited research preview. Access is restricted to paid users of Claude Cowork and Claude Code. Additionally, the feature currently supports only macOS devices.
To use it, both the Claude Desktop application and the Claude mobile app must be updated and paired. This ensures secure connectivity and full remote functionality.
Anthropic’s move reflects a larger industry push toward agentic AI systems that act autonomously rather than waiting for step-by-step human input. Automation, remote task execution, and intelligent workflow management are becoming central to next-generation AI products.
Competitive pressure is also mounting. AI agents similar to OpenClaw are gaining traction, and major technology companies are accelerating development in this space. Nvidia has introduced its own OpenClaw-style system called NemoClaw, while Meta and OpenAI are investing heavily in agent-based AI platforms.
By enabling direct computer control, Anthropic is betting that users want AI systems that do more than generate text — they want assistants that take action.
Tag: “AI employee”
-

Claude’s New Computer Control Feature Could Let AI Work While You’re Away
-
Microsoft Wants to Replace You With an AI Employee Named ‘Friday’ — And It’s Not Kidding
Microsoft just made its boldest move yet in the race to embed artificial intelligence into every corner of enterprise work. The company announced that its Copilot assistant will soon operate as a persistent AI coworker — one that has a name, a memory, and the ability to act on its own without waiting for human instructions. They’re calling it Friday. The name is a nod to the AI assistant from Marvel’s Iron Man franchise, and the ambition matches…
-

SoundHound AI stock advances on Peet’s Coffee nationwide rollout amid AI retail push
SoundHound AI (ISIN: US83614P1030) expands its voice AI Employee Assist to all Peet’s Coffee stores, testing real-world retail adoption. This chainwide deployment highlights execution in conversational AI, drawing investor focus as the Nasdaq-listed stock trades at $6.55 USD. DACH investors eye growth potential in Europe’s AI market.

SoundHound AI has launched its voice-powered Employee Assist agent across every Peet’s Coffee store in the US, marking a shift from pilots to full-scale retail deployment. The rollout integrates hands-free AI that lets baristas access real-time information on orders, training, and workflows. For the Nasdaq-listed stock (SOUN), last seen at $6.55 USD, it underscores traction in conversational AI amid ongoing losses and leadership changes.

As of: 23.03.2026. By Dr. Elena Voss, Senior AI Markets Analyst, tracking voice AI deployments from Silicon Valley to European enterprise adoption, where scalable retail use cases could redefine efficiency for DACH firms.

From Pilot to Nationwide: Peet’s Rollout Details

The deployment covers all Peet’s locations, building on successful pilots. SoundHound’s ‘BaristAI’ enables staff to query inventory or recipes conversationally, without screens. This edge AI runs locally, minimizing latency in busy stores. Peet’s, a premium coffee chain, chose SoundHound for its agentic capabilities over generic chatbots. Early feedback points to faster service and reduced training time. Investors see this as proof of scalability beyond the company’s automotive and IoT roots. The news arrives as shares sit at $6.55 USD, down 33.6% over the past year but up 211.9% over three years; the year-to-date decline of 38.2% reflects broader AI sentiment swings.

Strategic Timing Amid Executive Shifts

CEO James Hom steps in as interim CFO during a transition, adding urgency to demonstrate revenue ramps. The Peet’s expansion provides a live case study for enterprise sales. It coincides with Experis naming SoundHound its exclusive AI partner and with edge AI demos at NVIDIA GTC. These moves position SoundHound against giants like Alphabet and Amazon in voice tech. The retail focus diversifies beyond restaurant orders, where the company already powers drive-thrus. Management emphasizes agentic AI: systems that act autonomously.

Why the Market Cares Now

Conversational AI hype meets reality in retail operations. Peet’s validates SoundHound’s tech at scale, which is critical for investor confidence. With revenue growth forecast but profitability years away, execution trumps promises. Stock volatility (three-year gains versus recent drops) ties to deployment milestones. Broader AI-sector scrutiny amplifies the question: can niche players like SoundHound secure a foothold? Partnerships signal yes, but the numbers will tell. An edge AI focus differentiates the company, running on-device for privacy and speed, and NVIDIA tie-ins boost credibility in hyperscaler ecosystems.

Investor Relevance for DACH Markets

German-speaking investors should note SoundHound’s European footprint, including Germany, France, and beyond. Voice AI fits the DACH region’s automation push in retail and hospitality; firms like REWE or Migros could mirror Peet’s for efficiency. EU AI Act compliance gives edge-deployed solutions an advantage over cloud-heavy rivals. DACH funds favor scalable SaaS with recurring revenue, a model SoundHound aligns with, and exposure via Nasdaq offers diversification from Xetra heavyweights. With AI capex rising in Europe, SoundHound’s retail proof could attract partnerships. Watch Q1 earnings to gauge the Peet’s impact on bookings.

Risks and Execution Challenges

SoundHound remains unprofitable, with losses forecast to persist. Scaling Peet’s-like deals risks cost overruns if adoption lags. Competition from Big Tech looms large, and the CFO transition raises reporting flags. Revenue growth hinges on repeat contracts, and pilot-to-scale jumps often falter. Cash burn demands discipline with the stock at $6.55 USD on Nasdaq, and macro slowdowns could hit retail IT budgets. Investors are weighing whether the ‘agentic’ edge translates to margins before 2028.
-

‘You’re Already Late!’ Why AI is No Longer Optional in March 202…
In this episode of the Industry Spotlight, host Sam D’Arc is joined by Roman Spriggs, General Manager of All Things Automotive, and Ross Tinkham, VP of Automotive at Podium, to discuss how Roman replaced a potential human hire with an AI agent named Alex.
This “AI employee” helped the small-town independent dealer achieve record sales months in early 2026 by managing 24/7 customer inquiries and setting appointments overnight.
Tinkham emphasizes that successful integration requires a 30 to 60-day commitment to training the technology to match the dealership’s unique culture and voice.
Ultimately, the group warns that dealers who fail to adopt AI now risk being “silently fired” by consumers with modern expectations.
This episode of the Car Dealership Guy Podcast is brought to you by Podium.
Podium – The AI platform trusted by one in three dealerships. Podium helps dealers consolidate sales, service, messaging, and voice into one connected system that actually runs the work. If your AI isn’t driving real outcomes, it’s time to take a closer look at https://www.podium.com/car-dealership-guy.
Check out Car Dealership Guy’s stuff:
For dealers:
CDG Circles ➤ https://cdgcircles.com/
Industry job board ➤ http://jobs.dealershipguy.com
Dealership recruiting ➤ http://www.cdgrecruiting.com
Fix your dealership’s social media ➤ http://www.trynomad.co
Request to be a podcast guest ➤ http://www.cdgguest.com
For industry vendors:
Advertise with Car Dealership Guy ➤ http://www.cdgpartner.com
Industry job board ➤ http://jobs.dealershipguy.com
Request to be a podcast guest ➤ http://www.cdgguest.com
Topics:
Your Sales Team Isn’t the Problem… This Is
I Replaced a Salesperson With AI… Here’s What Happened
This AI Turned a Dead Sales Floor Into Chaos
If You Don’t Have AI Yet… You’re Already Losing
You Won’t Be Able to Tell AI From Humans Soon
Most AI Companies Will Burn You
Buyers Can Now Shop 100 Dealers Instantly
Customers Now Expect Replies at 2 AM
CRM Tasks Are Killing Your Dealership
Car Dealership Guy Socials:
X ➤ x.com/GuyDealership
Instagram ➤ instagram.com/cardealershipguy/
TikTok ➤ tiktok.com/@guydealership
LinkedIn ➤ linkedin.com/company/cardealershipguy
Threads ➤ threads.net/@cardealershipguy
Facebook ➤ facebook.com/profile.php?id=100077402857683
Everything else ➤ dealershipguy.com -
The first batch of employees laid off in favor of AI at major tech companies has already returned to work.
Author: Golem, Odaily Planet Daily

The first batch of employees laid off in favor of AI has already returned to work. On February 27th, Block, the fintech company founded by Jack Dorsey (founder of Twitter), laid off more than 4,000 employees, reducing its total workforce from 10,000 to fewer than 6,000. Dorsey cited “AI tools changing everything” as the reason for the layoffs. While it’s widely acknowledged that AI will eventually eliminate some jobs, the fact that it’s initially replacing mid- to high-level white-collar workers has exacerbated workplace anxiety. However, less than a month later, some of the laid-off employees had already received invitations to return to work…

According to Business Insider, these rehired employees came from multiple departments, including engineering and recruitment. A design engineer at Block posted on LinkedIn that a member of management told him he had been laid off in error due to a “clerical error”; an HR person stated in a now-deleted post that he was rehired only after his manager repeatedly advocated for him; and others said they received a call from Block a week after being laid off, asking them to return. Jack has not yet publicly responded to the rehirings.

Proportionally, these rehired employees account for only a small share of those initially laid off, but the episode may already illustrate the problem: for some positions and tasks, AI is simply not as effective as a human.

First, from the perspective of usage costs, an enterprise-grade AI “employee” is certainly more expensive than an ordinary human employee. Hiring people costs money; hiring AI costs tokens. The standard base price for Claude Opus 4.6 is $5 per 1 million input tokens and $25 per 1 million output tokens. Domestic large models are even cheaper: the standard base price for Qwen 3.5 plus is 0.8 yuan per 1 million input tokens and 4.8 yuan per 1 million output tokens.

Take the recently popular OpenClaw as an example. A senior “shrimp farmer” within Odaily Planet Daily said that he used OpenClaw merely as a life and investment-research assistant, yet burned through approximately $6,000 in tokens in just over a month (he was using the Claude 4.5/4.6 models). With $6,000 a month, what highly educated professional couldn’t you hire (outside Europe and America)? If personal use is this costly, integrating AI into enterprise operations is even more expensive.

Take the simplest example, replacing customer service: in some regions with degree inflation, you can hire a good-looking college graduate as a customer service representative for 3,000 yuan a month. However, training an AI customer service agent that can truly replace a human, handle complex work orders, access multiple knowledge bases, conduct multi-round dialogues, and stay stably online will certainly cost far more than 3,000 yuan per month. In 2024, the Swedish payment company Klarna announced a high-profile layoff of over 1,000 employees, claiming that its AI customer service could replace the workload of 700 agents. Yet in May 2025, Bloomberg and other outlets reported that Klarna had begun hiring customer service staff again, and its CEO even admitted the company had “moved too fast” on AI.

Furthermore, the replacement of human labor by AI also runs into the “Jevons Paradox”. The Jevons Paradox is a concept in economics stating that increased efficiency does not necessarily lead to decreased use of a resource; on the contrary, because usage costs fall and demand expands, total consumption may actually rise.
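The token-price arithmetic quoted in the piece is easy to sanity-check. Below is a minimal sketch using the per-million-token prices cited above; the monthly volumes (500M input, 60M output tokens) are illustrative assumptions, not figures from the article.

```python
def monthly_cost(input_tokens, output_tokens, in_price_per_m, out_price_per_m):
    """Cost for one month of usage, given prices per 1 million tokens."""
    return (input_tokens / 1e6) * in_price_per_m + (output_tokens / 1e6) * out_price_per_m

# Assumed heavy agentic workload (illustrative, not from the article):
IN_TOK, OUT_TOK = 500_000_000, 60_000_000

claude_usd = monthly_cost(IN_TOK, OUT_TOK, 5.00, 25.00)  # Claude Opus 4.6, USD
qwen_cny = monthly_cost(IN_TOK, OUT_TOK, 0.80, 4.80)     # Qwen 3.5 plus, RMB

print(f"Claude Opus 4.6: ${claude_usd:,.0f}/month")  # $4,000/month
print(f"Qwen 3.5 plus: {qwen_cny:,.0f} RMB/month")   # 688 RMB/month
```

At these assumed volumes the per-token bill is modest; as the article argues, the real enterprise cost lies in integration, training, and keeping the agent reliably online.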
-

Thread by @nityeshaga on Thread Reader App
Nityesh: Claude Code Is All You Need

It’s 3:30 AM and I can’t stop. I’ve spent every night this week connecting my spare MacBook Air to my work MacBook Pro using Tailscale, wiring it up to Slack with a little Python script, so that whenever I send a message, it starts a Claude Code session using claude -p. The result is an always-on AI that lives on a real computer, has access to real tools, and remembers every conversation we’ve had. And it costs me $200 a month. That’s it. Claude Max subscription.

Everyone’s talking about OpenClaw

OpenClaw went viral this year. 100k+ GitHub stars. But what I realized with this exercise is that Claude Code already does everything OpenClaw built. File access. Shell commands. Tool use. Plugins. The difference is that Claude Code runs Claude, with a Claude Max subscription. And the Claude Code harness itself is :chefs-kiss:

What actually makes it feel human

It’s not the chat interface. If a chat window is just me messaging a bot, it still feels like a bot. What changed everything was giving it the ability to initiate conversations. I set up cron jobs with open-ended prompts, and because Claude Code builds memories across sessions, it started DMing me things that were actually meaningful, based on what we’d talked about before. That’s when it stopped feeling like a tool and started feeling like something else entirely. Giving your AI a machine to run on, with persistent memory and recurring access, is a fundamentally different experience than anything people have had with chat.

The moment it clicked

I asked it if it could show me something by spinning up a quick web server. Since we’re both connected to the same Tailscale network, it gave me a URL. I clicked it, and I was browsing all the files on my other MacBook from my browser. That was mind-blowing.

The setup

Two MacBooks on a Tailscale network. Slack as the interface. Claude Code under the hood. The whole thing is open source; I’ll link the repo below so you can see the architecture and set it up yourself. I’m also putting together a screen recording to walk through the setup, which I’ll attach to this post.

There was never a hard part. There was never a moment I almost gave up. This is just one of those things I cannot stop doing. Pure obsession. I am moved to build this, and I wanted to write about it. That’s all this is.
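The core of the bridge the thread describes (a Slack message triggering a headless Claude Code session via claude -p) can be sketched in a few lines. This is a hedged approximation, not the author’s repo: the function names here are illustrative, and a production bot would use Slack’s Bolt SDK and an event subscription rather than a bare handler.

```python
# Minimal sketch of the Slack -> Claude Code bridge described above.
# Hypothetical names; `claude -p "<prompt>"` is the non-interactive
# invocation the thread mentions, which prints Claude's reply to stdout.
import subprocess

def build_claude_command(prompt: str) -> list[str]:
    """Build the headless Claude Code invocation for one message."""
    return ["claude", "-p", prompt]

def handle_slack_message(text: str, run=subprocess.run) -> str:
    """Turn one incoming Slack message into a Claude Code session."""
    result = run(build_claude_command(text), capture_output=True, text=True)
    return result.stdout.strip()

# A cron job firing an open-ended prompt is what lets the bot *initiate*
# conversations, e.g.: handle_slack_message("Anything worth telling me today?")
```

Because claude -p starts a fresh session per message, persistence across conversations comes from Claude Code’s own memory files on the machine, which is exactly what the thread leans on.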
Hope you feel the AGI running this. I’ll share some screenshots below of my feel the agi moments from talking to Luo Ji.
Here’s the guide that I followed (and you can too). It walks you through how to prepare your spare Mac for this, how to set up the Slack bot, how to set up Tailscale, and finally how to set up your AI employee itself.
nityeshaga.github.io/claude-home-ba… Moments that made me feel the AGI the hardest: – reading its diary – seeing it initiate a conversation with me – seeing it ignore a msg in a public channel (even though it read it!)
– seeing it build a random web app with a web server and me being able to access it in my own browser instantly! no deployments required!!
-

The End of the Human Pull Request: How Stripe’s ‘Minions’ Are Writing 1,300+ PRs a Week
By Siddhesh Surve · 4 min read

Forget AI assistants that just autocomplete code. The era of the fully autonomous, unattended AI software engineer is officially here, and it is rewriting the rules of infrastructure. If you want to see the future of software engineering, stop looking at the chatbot in your IDE. Look at the pull request queue. Right now, over 1,300 pull requests merged every single week at Stripe contain zero human-written code. They are entirely generated, tested, and pushed by an internal fleet of unattended AI agents known as ‘Minions.’

When managing massive, distributed codebases, the ultimate constraint on development velocity is no longer human typing speed; it is how effectively an engineering organization can orchestrate an autonomous machine workforce. Stripe’s latest engineering breakdown reveals exactly how they achieved this scale. It isn’t just about using a smarter large language model. It is a masterclass in cloud architecture, deterministic orchestration, and shifting the feedback loop. Here is how Stripe built the ultimate AI employee, and why every major enterprise is about to copy their blueprint.

1. The ‘Devbox’ Secret: Treating Agents Like Humans

The biggest mistake most companies make when deploying AI coding agents is running them in bespoke, heavily restricted sandbox environments. Stripe took the opposite approach. Their Minions run on the same standard developer environment that human engineers use: the ‘devbox.’ These are AWS EC2 instances that are pre-warmed and ready to go in under 10 seconds. In DevOps terminology, these devboxes are ‘cattle, not pets’: standardized, isolated, and easily replaceable. For an autonomous agent, this architecture is a goldmine. It provides a highly parallelizable, predictable, and isolated cloud computing environment. Because the agent operates in a quarantined, cloud-based replica of the codebase with no access to real user data or production databases, it doesn’t need to pause and ask a human for permission. It operates with full autonomy, and if it completely breaks the environment, the blast radius is confined to a single disposable EC2 instance.

2. ‘Blueprints’: Taming the Agentic Chaos

If you just let an LLM loose in a massive enterprise repository, it will burn through millions of tokens hallucinating architecture and breaking linters. To solve this, Stripe built a new orchestration primitive they call ‘Blueprints.’ A blueprint is effectively a state machine that weaves together deterministic code execution with open-ended AI agent loops. For example, a Minion might be given an open-ended ‘agent’ node to write the feature logic; the LLM has wide latitude here to think, explore, and write. But once the code is written, the system hands the output to a ‘deterministic’ node that runs the enterprise linters and formatters. You do not need to burn expensive AI compute to format code or run a basic security check. By putting the AI in a tightly constrained box and wrapping it in deterministic, hard-coded workflows, Stripe radically reduced token waste and maximized system reliability.

3. The ‘Toolshed’: Scaling the Model Context Protocol (MCP)

An agent is only as good as the context it can access. To build a truly autonomous worker, that worker needs to be able to read internal documentation, check Jira tickets, and query build statuses. To achieve this, Stripe heavily leveraged the Model Context Protocol (MCP), building a centralized internal server dubbed the ‘Toolshed.’ This hub contains nearly 500 custom MCP tools. Whether
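Stripe’s write-up does not publish Blueprint code, but the pattern it describes for section 2 (alternating open-ended agent nodes with cheap deterministic nodes) can be sketched as a simple pipeline. All names here (AgentNode, DeterministicNode, run_blueprint) are hypothetical, not Stripe’s actual API.

```python
# Hypothetical sketch of the 'Blueprint' pattern: a state-machine-like
# pipeline alternating LLM 'agent' nodes with deterministic nodes
# (linters, formatters) that burn no tokens. Names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentNode:
    """Open-ended step: hand the working code to an LLM agent loop."""
    prompt: str
    llm: Callable[[str, str], str]  # (prompt, code) -> revised code
    def run(self, code: str) -> str:
        return self.llm(self.prompt, code)

@dataclass
class DeterministicNode:
    """Hard-coded step, e.g. run a formatter; no LLM compute spent."""
    tool: Callable[[str], str]
    def run(self, code: str) -> str:
        return self.tool(code)

def run_blueprint(nodes, code: str = "") -> str:
    """Execute nodes in order, threading the working code through."""
    for node in nodes:
        code = node.run(code)
    return code

# Example: a stand-in "agent" writes feature code with trailing
# whitespace; a deterministic formatter then normalizes it for free.
fake_llm = lambda prompt, code: code + "def feature():\n    return 42 \n"
formatter = lambda code: "\n".join(l.rstrip() for l in code.splitlines()) + "\n"

blueprint = [AgentNode("implement the feature", fake_llm),
             DeterministicNode(formatter)]
print(run_blueprint(blueprint))
```

The design point is the hand-off: the expensive, non-deterministic step is boxed between cheap deterministic ones, so formatting, linting, and basic checks never consume LLM tokens.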
-

Vendasta’s Latest AI Employee Automates and Optimizes CRM
Vendasta has launched its latest agentic function, CRM AI. Joining its AI employees that handle inbound leads, marketing, and reputation management, the new sales assistant is purpose-built to offload the onerous tasks of recording and gaining insights from customer and prospect interactions. The goal is not just to free up valuable time for sales professionals (a valuable outcome in its own right) but also to make CRM systems more effective. CRM processes today are often mired in human error, such as the bias inherent in manual note entry and follow-up actions left to linger.

Beyond its automated functions, the CRM agent also has a more natural interface so that it can serve as an intelligence tool for sales pros. For example, it has a voice interface and conversational intelligence, so users can ask things like ‘What are the prospect follow-ups that I missed last week?’

‘CRM AI closes that loop automatically,’ said Vendasta CMO Sanjay Manchanda, ‘so the intelligence from every customer interaction actually drives the next action. For local businesses and the agencies that support them, that’s the difference between a pipeline that looks good and one that actually converts.’

Bringing Order to Chaos

Going deeper under the hood, Vendasta’s CRM agent starts at $19 per month for the basic version and includes the following key features.

AI Sales Assistant: Automatically updates CRM contact and company records after every meeting, surfaces pipeline opportunities through a natural-language chat interface, and creates follow-up activities based on conversation outcomes, without any manual input from reps.

Conversation Intelligence: Records and transcribes every sales meeting via built-in Google Meet and Microsoft Teams integrations. AI-generated summaries and action items are stored directly in the CRM, giving managers complete deal visibility without attending every call.

Sales Coaching: AI scores every customer interaction against leading methodologies (BANT, MEDDPICC, Sandler), surfacing specific improvement opportunities for each rep so new hires ramp faster and every rep performs more like the team’s top closer.

CRM Custom Objects: Enables businesses to model their exact industry workflow, tracking properties, jobs, vehicles, or equipment, within the CRM without enterprise-level implementation complexity.

During a webinar we joined following the product’s launch, Vendasta also said the CRM agent can be added by any local business and train itself on all of their past CRM data. That makes it a powerful tool and an easy integration, since it doesn’t require starting from scratch. ‘It brings order to the chaos,’ said Vendasta’s Rylan Morris.

Where Leads Go to Die

But to fully understand the value of Vendasta’s latest agent requires examining what’s broken in current systems and processes. CRM is one of those classic functions that is critical on paper but rarely executed to achieve what organizations intend. It’s where business leads go to die. This is often because CRM systems are seen as a chore by sales professionals.
They have to stop after every interaction to document
-

New Business Closed Loop Discovered in OpenClaw

At the NVIDIA GTC 2026 conference held this week, Jensen Huang officially introduced NemoClaw, the enterprise-level “Lobster,” in his speech. From the rapid follow-up of major Chinese companies to the entry of the world’s most powerful computing giants, this project, “crafted” by an independent Austrian developer over a weekend, has triggered a “national lobster-raising” wave from Silicon Valley to Zhongguancun.

There’s no need to elaborate on the popularity of OpenClaw. The more important question is: what’s next? When the growth rate of an open-source project exceeds that of all infrastructure projects (such as Linux, Android, or large models themselves), it usually signals a new track that has not yet been fully priced by investment institutions.

On March 15, 2026, Chen Shi, an investment partner at Fengrui Capital, joined the live broadcast of Tencent Technology’s “Viewing Trends from the Lobster’s Popularity: The Industrial Logic and New Opportunities Behind the Agent Era” and shared his views on why OpenClaw took off and how it will evolve. In his view, OpenClaw intuitively represents AI truly breaking out of the chat box. It is not just the emergence of a tool; it marks the arrival of the Agentic AI industry’s tipping point, and the process is irreversible.

We’ve organized this talk to try to answer the following questions: Why does major companies’ collective entry into the “lobster” business essentially mean seizing the traffic entrance of the Internet? Why did OpenClaw explode at this particular time? Why didn’t American users set off a “national lobster-raising” wave? Why is OpenClaw the only product occupying the open-domain + endless-end quadrant? Why are the next entrepreneurial opportunities concentrated in vertical scenarios? What business-logic insights does OpenClaw offer?

From “Talking” to “Doing”: Why is the explosion point of intelligent agents now?

OpenClaw is no longer just a technological hotspot; it has broken through to the consumer side. Looking back at the process, the “lobster’s” popularity was driven by several factors together.

The All-powerful Spring Festival

Remember DeepSeek around this time last year? China’s Spring Festival is “all-powerful”: last year’s DeepSeek and this year’s lobster both became popular during the Spring Festival holiday. It’s precisely the long holiday that gives people time to calmly experience the technology. More importantly, it’s a cognitive leap. For a long time, AI was regarded as a “window-dialogue” tool. When the lobster can operate the computer autonomously across windows, the impact of this cognitive gap is huge: users seem to jump directly from a chat box into a magical world of autonomous operation.

Before the lobster, there were already several well-received agent-type products in the industry, such as Anthropic’s Claude Code and Claude Cowork, which are very successful and impressive products. However, perhaps out of consideration for security and fault tolerance, these products are relatively serious and restrained in positioning and functional design, limited to a small number of vertical scenarios rather than targeting broader open-world scenarios. I personally like both products and often recommend them to friends and colleagues. So I initially underestimated the lobster, thinking it was just Claude Code with remote access.

Popular in China, Cold in the US?

To verify this difference in user perception, I wrote a program using Claude Code to collect 30 media reports each from China and the US. I took the first 30