CloudTalk

Category: Technology

  • Humans& Bets on AI Collaboration: The Next Frontier


    The hum of servers filled the room, a constant thrum beneath the focused energy of the team. It was late October 2025, and the Humans& engineers were deep in the weeds, poring over thermal test results. What the company calls a new generation of foundation models for collaboration was on the line.

    Founded by alumni from Anthropic, Meta, OpenAI, xAI, and Google DeepMind, Humans& is betting big that the next leap in AI isn’t just about bigger models, but better coordination. Unlike many in the current AI landscape, their focus isn’t on chatbot technology. Instead, they’re building systems designed for collaboration. Think AI that can help teams work together, not just generate text.

    The core of their approach, according to sources familiar with the company, involves a shift in how AI models are trained and deployed. Instead of solely focusing on language generation, Humans& is building models capable of understanding and responding to complex, multi-agent interactions. This means the AI can, for example, coordinate tasks, manage projects, or even facilitate negotiations. This is a big departure from current models.
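
    Humans& hasn’t published how this coordination layer actually works. As a rough illustration of the idea, though, a minimal sketch of a coordinator routing tasks among specialized agents might look like the following; every name here (Agent, Coordinator, the skill tags) is hypothetical, not the company’s API.

    ```python
    # Minimal sketch of multi-agent task coordination (hypothetical API,
    # not Humans&'s actual system): a coordinator routes each task to the
    # first agent advertising the required skill and logs the result.
    from dataclasses import dataclass, field


    @dataclass
    class Agent:
        name: str
        skills: set[str]

        def run(self, task: str) -> str:
            # Stand-in for a model call; a real agent would invoke an LLM here.
            return f"{self.name} completed: {task}"


    @dataclass
    class Coordinator:
        agents: list[Agent]
        log: list[str] = field(default_factory=list)

        def dispatch(self, task: str, required_skill: str) -> str:
            for agent in self.agents:
                if required_skill in agent.skills:
                    result = agent.run(task)
                    self.log.append(result)
                    return result
            raise ValueError(f"no agent can handle skill: {required_skill}")


    team = Coordinator(agents=[
        Agent("planner", {"scheduling"}),
        Agent("negotiator", {"negotiation"}),
    ])
    print(team.dispatch("draft a project timeline", "scheduling"))
    print(team.dispatch("align vendors on pricing", "negotiation"))
    ```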

    “The market is definitely moving in this direction,” said analyst Sarah Chen of Deepwater Research, during a call earlier this week. “We’re seeing a push for AI that can handle more complex workflows, and Humans& is positioned to capitalize on that.” Chen estimates the market for collaborative AI tools could reach $10 billion by 2027.

    The team is working towards several milestones. The M100 model, slated for release in early 2026, focuses on basic task coordination. The M300, planned for 2027, will incorporate advanced features like real-time decision-making and dynamic resource allocation. That’s the plan, anyway.

    Meanwhile, the supply chain is a constant concern. Export controls and manufacturing capacity are major hurdles. The team is aware of the limitations. They’re dealing with the same chip constraints and manufacturing bottlenecks as everyone else. SMIC versus TSMC is a daily conversation, and the US domestic procurement policies add another layer of complexity.

    The challenge, as some see it, is proving the value of coordination. It’s a different metric than the current benchmarks of language models. But Humans& is confident. The company believes that by focusing on collaboration, they can unlock a new level of productivity and efficiency.

    It’s a long shot, maybe. But the engineers kept working, the servers kept humming. The future, in their view, is collaboration.

  • Quadric: On-Device AI Chips Revolutionize Computing


    The hum of servers used to be the sound of AI. Now, it’s the quiet whir of a chip, nestled inside a device. At least, that’s the bet Quadric is making. The company, which aims to help businesses and governments build programmable on-device AI chips, is riding a significant shift in the artificial intelligence landscape. The move away from cloud-based AI to on-device inference is gaining momentum, and Quadric seems well-positioned to capitalize.

    Earlier this week, during a call with investors, a Quadric spokesperson highlighted their focus on fast-changing models. This means the ability to run updated AI algorithms locally, without constantly pinging the cloud. It’s a critical advantage in fields like edge computing, robotics, and even national security, where latency and data privacy are paramount.
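
    Quadric hasn’t detailed its software stack publicly, but the general pattern it describes is well understood: serve inference from a locally cached model and treat the network as optional, checking for newer weights only opportunistically. A minimal sketch under those assumptions follows; the manifest URL, cache path, and runtime hook are all illustrative, not Quadric’s API.

    ```python
    # Sketch of the on-device inference pattern: keep model weights cached
    # locally, refresh them opportunistically, and never block inference on
    # the network. Paths, URLs, and names are illustrative, not Quadric's API.
    import json
    import pathlib
    import urllib.request

    CACHE = pathlib.Path("/var/cache/models/current.bin")       # hypothetical
    MANIFEST_URL = "https://example.com/models/manifest.json"   # hypothetical


    def maybe_refresh(local_version: int) -> int:
        """Best-effort update check; any failure keeps the cached model."""
        try:
            with urllib.request.urlopen(MANIFEST_URL, timeout=2) as resp:
                manifest = json.load(resp)
            if manifest["version"] > local_version:
                with urllib.request.urlopen(manifest["url"], timeout=30) as dl:
                    CACHE.write_bytes(dl.read())
                return manifest["version"]
        except OSError:
            pass  # offline or slow network: keep serving the cached weights
        return local_version


    def infer(inputs: bytes) -> bytes:
        # Stand-in for dispatching to the on-device accelerator; a real
        # runtime would load CACHE and execute the model on the NPU.
        return inputs
    ```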

    The technical challenges are significant. On-device AI demands powerful, yet energy-efficient, processing. Traditional GPUs, designed for the cloud, often fall short. Quadric’s approach involves developing specialized chips. These chips are designed to handle the complex computations needed for AI models right on the device. This is a bit of a departure from the conventional wisdom of recent years.

    “The market is definitely moving in this direction,” said John Thompson, a senior analyst at Forrester, in a recent interview. “We’re seeing increased demand for low-latency, secure AI solutions, and on-device inference is a key enabler.” The analyst also noted a shift in procurement priorities in key markets, especially in light of export controls and domestic supply chain policies.

    Consider the details: Quadric’s roadmap includes the M100 and M300 chips, with projected releases in 2026 and 2027, respectively. The company is targeting a performance increase of up to 5x compared to existing solutions, as per internal projections. But the true test will be the real world, and how well these chips can handle the dynamic demands of AI models.

    Meanwhile, the supply chain remains a critical factor. The availability of advanced manufacturing processes, particularly those offered by TSMC, could be a bottleneck. The U.S. export rules and domestic procurement policies also play a significant role. It’s a complex equation, where innovation meets the realities of global politics and manufacturing capacity.

    Still, the shift towards on-device AI is clear. Quadric is among the companies poised to benefit. It’s a space that’s going to be interesting to watch as the year progresses.

  • Quadric: On-Device AI Chips Revolutionize Computing


    The hum of servers, usually a constant drone, seemed slightly quieter, or maybe that’s just how the supply shock reads from up close. Inside Quadric’s engineering lab, the team was running thermal tests on the new M300 chip, slated for release in early 2027, according to their roadmap. The goal: to enable AI processing directly on devices, bypassing the need for constant cloud connectivity.

    It’s a strategic pivot, as the industry begins to recognize the limitations of cloud-dependent AI. Quadric, founded to help companies and governments build their own AI silicon, sees the potential in programmable on-device AI chips designed to run fast-changing models locally. That means quicker response times and enhanced data privacy, key selling points in an increasingly security-conscious world.

    “We’re seeing a significant shift,” said analyst Maria Chen from Forrester, during a recent industry briefing. “The demand for on-device inference is surging, and companies like Quadric are well-positioned to capitalize. We project the market to reach $15 billion by 2028.” That’s a bold number, considering the sector was still nascent just a few years ago. But the need is there: think of self-driving cars needing instant reactions, or edge devices in remote locations with limited bandwidth.

    The technical challenges are significant. Building these chips requires advanced manufacturing, and the global supply chain, still recovering from recent disruptions, adds another layer of complexity. Export controls also play a major role. Quadric, like many in the industry, has to navigate the complex web of US and international regulations, and China’s domestic procurement policies could influence the company’s strategy as well.

    Earlier today, the team was reviewing the performance metrics for the M100, which is already in use. The focus now is on the M300, which promises a substantial performance leap. The engineers were huddled around monitors, analyzing the data. The atmosphere was focused, the air thick with anticipation. The M300 is expected to offer a 4x performance increase over the M100, according to internal projections.
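
    Projections like that 4x figure generally come out of latency benchmarks. A minimal harness for that kind of comparison might look like this; the run_old and run_new callables are placeholders standing in for real M100 and M300 runtimes, not Quadric tooling.

    ```python
    # Minimal latency-comparison harness: time two inference backends on the
    # same input and report the median speedup. The run_old/run_new callables
    # are placeholders standing in for real M100/M300 runtimes.
    import statistics
    import time


    def time_backend(run, inputs, warmup=3, iters=50):
        for _ in range(warmup):          # discard cold-start effects
            run(inputs)
        samples = []
        for _ in range(iters):
            start = time.perf_counter()
            run(inputs)
            samples.append(time.perf_counter() - start)
        return statistics.median(samples)


    def report_speedup(run_old, run_new, inputs):
        old = time_backend(run_old, inputs)
        new = time_backend(run_new, inputs)
        print(f"baseline {old * 1e3:.2f} ms, new {new * 1e3:.2f} ms, "
              f"speedup {old / new:.1f}x")
    ```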

    The shift to on-device AI is more than a technological evolution; it’s a strategic move. It gives companies and governments greater control over their data and operations. Quadric is, in a way, at the forefront of this transformation. Their success will depend on their ability to deliver on their promises, navigate the complex regulatory landscape, and, of course, stay ahead of the competition.

  • Tiger Global & Microsoft Exit PhonePe Ahead of IPO


    The numbers were coming in fast, screens flickering in the subdued light of the Bloomberg terminal room. It was January 22, 2026, and the news was breaking: Tiger Global and Microsoft were set to fully exit their positions in PhonePe, the digital payments firm backed by Walmart. The move, announced ahead of PhonePe’s initial public offering, sent a ripple through the market, or so it seemed.

    Walmart, however, wasn’t following suit. Instead, the retail giant planned to retain its majority stake, while also offloading up to 45.9 million shares. The shift in strategy was immediately apparent, and the air in the room felt thick with speculation. What did it mean? Did the exits signal a lack of faith, or a strategic realignment? Or something else entirely?

    The atmosphere was tense, the chatter on the conference call muted. Analysts were already running the numbers, trying to make sense of the valuation implications. One expert, speaking from the Peterson Institute for International Economics, suggested the move could reflect a broader trend. “It’s about portfolio diversification, and maybe, just maybe, a reassessment of risk in the current climate,” she said, her voice a steady counterpoint to the rising tide of market noise.

    Tiger Global and Microsoft’s decision to fully exit, while Walmart held steady, made for a stark contrast.

    The financial mechanics were intricate, the details of the IPO still unfolding. But the core story was clear: major players were making significant moves. The market’s reaction, of course, was the key.

    The implications were vast, and the possible scenarios, numerous. A successful IPO would validate PhonePe’s growth trajectory, but it also opened the door to new risks. Tax implications, regulatory hurdles, and evolving consumer behavior—all were factors that would shape the company’s future.

    The analysts continued to tap at their spreadsheets, the data points flashing across their screens, the sound a low hum. It was a complex, evolving situation, and the final chapter, still unwritten.

    And it was clear, the story wasn’t over.

  • Tiger Global & Microsoft Exit PhonePe IPO: Market Shift


    The news hit the wires on January 22, 2026, a Thursday, and the trading floor felt… subdued. Or maybe it was just the usual weekday quiet, the air conditioning humming a steady drone, analysts already tapping away at spreadsheets. Tiger Global and Microsoft were finally pulling out of PhonePe, the Walmart-backed digital payments firm, via its upcoming IPO. Not a complete surprise, but the scale of the exit was notable.

    Reports indicate that Tiger Global and Microsoft are offering their full stakes. Walmart, on the other hand, is retaining its controlling interest, though it’s also selling a chunk – up to 45.9 million shares. It’s a shift, a repositioning, the kind that always makes you wonder what the smart money sees that the rest of us don’t.

    Details are still emerging, but the implications are already echoing. The market’s initial reaction? Muted, as far as could be seen. A quick glance at the early trading indicators told the story. This isn’t necessarily a sign of trouble, of course — it could be a strategic move to capitalize on the IPO’s potential. Still, some analysts are cautioning against reading too much into the initial reaction, suggesting a wait-and-see approach. As one financial analyst from a well-known research firm said, “These kinds of exits are complex, reflecting a blend of portfolio strategy, market timing, and potentially, tax considerations.”

    This isn’t the first time we’ve seen this kind of play. There’s a pattern, a rhythm, to these large-scale exits. The timing, the valuation, the overall market conditions – all play a part, a complicated dance. It’s a game of chess, in a way. The players are shifting their pieces, and the board is constantly changing.

    The exit of these major investors raises several questions. What does this mean for PhonePe’s future? For Walmart’s long-term strategy in the Indian market? And, perhaps most importantly, what does it signal about the broader tech investment landscape? The answers, as always, are not straightforward.

    The details will become clearer in the coming weeks. But the initial move is made. The stakes are set.

  • TechCrunch Disrupt 2026 Tickets Now on Sale!


    The hum of servers, a constant thrum in the background, almost drowns out the chatter. It’s early January 2026, and the engineering team at a San Francisco-based AI startup is huddled around a monitor, running thermal tests on the latest GPU prototypes. Their focus is intense, the air thick with the smell of coffee and the quiet urgency of a looming deadline. They know the stakes: the next generation of AI models hinges on the performance of this hardware, and the pressure is on.

    Meanwhile, across town, the announcement everyone’s been waiting for dropped: TechCrunch Disrupt 2026 tickets are officially on sale. The event, scheduled for October 13-15 in San Francisco, promises to be a pivotal gathering. Over 10,000 tech leaders, founders, and venture capitalists are expected to attend, making it a prime opportunity to network and get a glimpse of the technologies set to shape the coming years.

    As per reports, early registrants can save up to $680 on their tickets. Plus, the first 500 people to register get a +1 pass at half price. It’s a move that underscores the event’s commitment to accessibility and the value it places on fostering connections within the tech community. The deals, as they say, won’t last forever.

    One of the key themes expected to dominate the conference is the evolution of AI hardware. Analysts at JP Morgan predict that the demand for advanced GPUs will surge in 2026, driven by the rapid growth of large language models (LLMs). The firm forecasts a 40% increase in demand for high-end GPUs, a trend that is already putting pressure on manufacturing capacities. The supply chain, still reeling from the effects of the 2024 chip shortages, faces another challenge. It seems like the constraints imposed by export controls and domestic procurement policies are complicating matters further.

    “The industry is at a critical juncture,” said Sarah Chen, a senior analyst at Gartner, during a recent briefing. “The ability to scale AI models depends directly on the availability of cutting-edge hardware. The next few months will be crucial.”

    The race to secure the best hardware is on. Companies are scrambling to get their hands on the latest chips, with the M300 and future iterations set to define the next generation of AI. Of course, the competition is fierce, and the stakes are high, but the potential rewards are even greater. It’s a complex landscape, a blend of technological innovation and geopolitical maneuvering, all playing out in real-time.

    The release of tickets for TechCrunch Disrupt 2026 feels like a tangible marker of this progress. It’s a chance to see what’s next, to hear from the people at the forefront of these advancements. And for those in the industry, it’s a reminder that the future is being built, brick by digital brick, right now.

  • Amazon EC2 G7e: NVIDIA RTX PRO 6000 Powers Generative AI


    The hum of the server room is a constant, a low thrum that vibrates through the floor. It’s a sound engineers at AWS, and probably NVIDIA too, know well. It’s the sound of progress, or at least, that’s how it feels when a new instance rolls out.

    Today, that sound seems a little louder. AWS announced the launch of Amazon EC2 G7e instances, powered by the NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. According to the announcement, these instances are designed to deliver cost-effective performance for generative AI inference workloads, and also offer the highest performance for graphics workloads.

    The move is significant. These new instances build on the existing G5g instances but, with the Blackwell architecture, promise up to 2.3 times better inference performance. That’s a serious jump, especially with the surging demand for generative AI applications. It’s a market that’s really exploded over the last year, and AWS is clearly positioning itself to capture a larger share.
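
    For teams wanting to kick the tires, requesting one of the new instances should follow the standard EC2 pattern. Here’s a hedged boto3 sketch; the size name g7e.xlarge and the AMI ID are assumptions for illustration, so check the console for the actual values in your region.

    ```python
    # Requesting a G7e instance with boto3. The instance-type string and the
    # AMI ID below are placeholders for illustration, not confirmed values.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder: use a real DL AMI ID
        InstanceType="g7e.xlarge",         # assumed G7e size name
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])
    ```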

    “This is a critical step,” says Jon Peddie, president of Jon Peddie Research. “The demand for accelerated computing continues to grow, and these new instances will provide customers with the performance they need.” Peddie’s firm forecasts continued growth in the cloud-based AI market, with projections showing a 30% year-over-year expansion through 2026.

    The technical details are, of course, complex. The Blackwell architecture, with its advanced multi-chip module design, is a game-changer. It allows for increased memory bandwidth and faster inter-chip communication. The RTX PRO 6000 GPUs, specifically, are built for handling the intense computational demands of AI inference. That’s what it’s all about, really.
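
    Memory bandwidth matters because generative inference is usually memory-bound: every generated token requires streaming the model’s weights through the GPU. A back-of-envelope estimate makes the point; the numbers below are illustrative assumptions, not published RTX PRO 6000 specs.

    ```python
    # Back-of-envelope token throughput for memory-bound inference:
    # tokens/sec ~= memory bandwidth / bytes streamed per token.
    # All numbers are illustrative assumptions, not official GPU specs.
    bandwidth_gb_s = 1600      # assumed GPU memory bandwidth, GB/s
    params_billion = 13        # assumed model size, billions of parameters
    bytes_per_param = 2        # fp16 weights

    bytes_per_token = params_billion * 1e9 * bytes_per_param
    tokens_per_sec = bandwidth_gb_s * 1e9 / bytes_per_token
    print(f"~{tokens_per_sec:.0f} tokens/sec upper bound")  # ~62 here
    ```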

    Meanwhile, the supply chain remains a key factor. While NVIDIA has ramped up production, constraints are still present. The competition for silicon is fierce, and the ongoing geopolitical tensions, particularly surrounding export controls, add another layer of complexity. SMIC, the leading Chinese chip manufacturer, is still behind TSMC in terms of cutting-edge manufacturing. That’s a reality.

    By evening, the news was spreading through Slack channels and industry forums. Engineers were already running tests, comparing performance metrics, and assessing the new instances’ capabilities. The promise of faster inference times and improved graphics performance was a compelling draw, and the potential for cost savings was an added bonus.

    And it seems like this is just the beginning. The roadmap for cloud computing is constantly evolving. In a way, these new instances are just a single node in a vast and intricate network. A network that’s still being built.

  • Amazon EC2 G7e: NVIDIA RTX PRO 6000 Powers Generative AI


    The hum of the servers is a constant, a low thrum that vibrates through the floor of the AWS data center. It’s a sound engineers know well, a symphony of silicon and electricity. Today, that symphony has a new movement: the arrival of Amazon EC2 G7e instances, powered by NVIDIA’s RTX PRO 6000 Blackwell Server Edition GPUs. This is, at least according to AWS, a significant leap forward.

    These new instances, announced in a recent blog post, are designed to boost performance for generative AI inference workloads and graphics applications. The key selling point? Up to 2.3 times the inference performance compared to previous generations, which, depending on the application, could mean a huge difference in cost and efficiency. It seems like a direct response to the increasing demand for AI-powered applications across various industries.
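
    What a 2.3x speedup means in dollars is easy to estimate. A quick worked example, using assumed hourly prices and request rates (neither figure is published AWS pricing):

    ```python
    # Cost-per-million-requests comparison under the claimed 2.3x speedup.
    # Hourly prices and throughput are assumptions, not AWS list prices.
    old_price_hr, new_price_hr = 12.0, 15.0   # $/hr, assumed
    old_rps = 100                             # requests/sec, assumed baseline
    new_rps = old_rps * 2.3                   # applying the 2.3x claim


    def cost_per_million(price_hr, rps):
        return price_hr / 3600 / rps * 1e6


    print(f"old gen: ${cost_per_million(old_price_hr, old_rps):.2f} per 1M")
    print(f"new gen: ${cost_per_million(new_price_hr, new_rps):.2f} per 1M")
    ```

    Under these assumptions, the per-request cost drops by roughly 45% even though the hourly rate is higher.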

    “The market is clearly shifting,” explained tech analyst Sarah Chen during a recent briefing. “Companies are looking for ways to run these complex models without breaking the bank. The G7e instances, with the Blackwell GPUs, are positioned to address that need.” Chen also noted that the move is a direct challenge to competitors.

    The Blackwell architecture itself is a significant upgrade. NVIDIA has been working on this for years, and the Server Edition of the RTX PRO 6000 is built for the demanding workloads of the cloud. The focus is on delivering high performance at a manageable cost, which matters in a market where every watt and every dollar counts. That could be very attractive for startups and established players alike.

    Earlier this year, analysts at Deutsche Bank projected that the AI inference market would reach $100 billion by 2026. The introduction of more powerful and efficient instances like the G7e suggests AWS is positioning itself to capture a significant portion of that growth. The supply chain, of course, remains a factor. The availability of advanced GPUs is still a concern, with manufacturing constraints at places like TSMC and potential export controls adding complexity.

    The announcement also highlights the ongoing competition in the cloud computing space. Other providers are also racing to provide the best and most cost-effective solutions for AI and graphics workloads. For the engineers on the ground, it’s a constant race to optimize performance, manage power consumption, and ensure that the infrastructure can handle the ever-increasing demands of AI. This is probably why the air in the data center always feels so charged.

    By evening, the initial excitement has died down, replaced by a quiet focus. The engineers are running tests, tweaking configurations, and monitoring performance metrics. The new instances are live, and the clock is ticking. The market is waiting, and AWS is ready.

  • Grubhub Acquires Claim: Restaurant Loyalty Shakeup


    The news hit the wires on January 20, 2026, or so the reports indicated. Grubhub’s parent company, the folks over at Just Eat Takeaway.com, had made a move. They’d acquired Claim, a startup focused on restaurant rewards programs. The deal, still unfolding in terms of its full impact, is designed to give restaurants on the Grubhub platform access to Claim’s customer acquisition and retention tools. And, of course, allow Grubhub diners to earn rewards.

    It’s a strategic play, no doubt about it. The online food delivery sector is a battlefield, and every advantage matters. The acquisition is an attempt to strengthen Grubhub’s position, to keep diners engaged, and to offer restaurants a more robust suite of services. The terms of the deal weren’t immediately disclosed, but market analysts were already crunching numbers, trying to estimate the long-term implications.

    The move comes at a time of shifting consumer behavior. The pandemic changed everything, of course, and the habits formed then still linger. People are still ordering in. But they’re also, more than ever, looking for value. It’s not just about convenience anymore. It’s about loyalty, about feeling appreciated. Or maybe I’m misreading it.

    A source close to the deal, speaking on condition of anonymity, suggested that the acquisition was driven, in part, by a desire to compete more effectively with DoorDash and Uber Eats, the other major players in the space. “It’s a land grab,” this person said, “a play for market share, pure and simple.”

    The implications are broad. According to a report from the National Restaurant Association, the restaurant industry is expected to generate $1.2 trillion in sales in 2026. A significant chunk of that will flow through online platforms. And the companies that can best capture and retain those customers will be the ones that thrive. It’s about more than just food delivery.

    An analyst from the Urban-Brookings Tax Policy Center noted that such acquisitions often trigger a ripple effect. “Changes in the competitive landscape can lead to adjustments in pricing, marketing strategies, and even the types of restaurants that thrive,” she explained. “It’s a dynamic ecosystem.”

    The deal also presents some interesting questions about data privacy and customer behavior. Claim has built its business on understanding how people interact with restaurant loyalty programs. The integration of that data with Grubhub’s existing customer information could create a powerful – and potentially sensitive – dataset. That’s a lot of information.

    Still, the market reacted positively, at least initially. Shares of Just Eat Takeaway.com saw a modest uptick following the announcement. Investors, it seems, are betting on the company’s ability to navigate the complexities of the food delivery market and to leverage the potential of Claim’s technology. The restaurant industry is always evolving.

    In the end, it’s a story about adaptation, about the constant push and pull of the market. And the ever-present need to stay ahead of the curve.

  • AWS Weekly Roundup: Kiro CLI, EC2 X8i, & European Sovereign Cloud


    The hum of the servers was a constant presence, a low thrum that vibrated through the floor of the AWS data center in Frankfurt. It was late January 2026, and the team was back from the holidays, diving headfirst into the new year’s updates. The AWS News Blog had just released its weekly roundup, and the buzz was immediate.

    First up, the Kiro CLI, the command-line interface, had some shiny new features. Apparently, it now supports a wider range of instance types, which, according to a blog post, streamlines deployment for the EC2 X8i instances. These instances, launched just a few months prior, were already making waves, promising significant performance gains for compute-intensive workloads.
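
    Before scripting a rollout, a common first step is checking where a new family is actually offered. Here’s a sketch using the EC2 API directly via boto3 (rather than the Kiro CLI, whose commands aren’t shown here); the size name x8i.xlarge is an assumption for illustration.

    ```python
    # Check which Availability Zones offer a new instance family, via the
    # EC2 API with boto3 (the size name "x8i.xlarge" is an assumption).
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")
    offerings = ec2.describe_instance_type_offerings(
        LocationType="availability-zone",
        Filters=[{"Name": "instance-type", "Values": ["x8i.xlarge"]}],
    )
    for offer in offerings["InstanceTypeOfferings"]:
        print(offer["Location"], offer["InstanceType"])
    ```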

    Then, the AWS European Sovereign Cloud. This was a big one. The initiative, designed to provide cloud services within the EU with enhanced data residency and control, was a direct response to increasing regulatory pressures. As per reports, the first phase of this rollout, based in Germany, had already seen a considerable uptake from government agencies and financial institutions. It seemed like a smart move.

    Meanwhile, the EC2 X8i instances themselves were attracting a lot of attention. They boasted improved networking and storage capabilities. An analyst from Gartner, in a recent report, predicted a 20% increase in adoption rates for these instances throughout 2026, driven by demand from AI and machine learning applications. They were built with Intel’s latest Xeon processors, which, for once, seemed to be keeping pace with the demands of the market.

    The team lead, Sarah Chen, leaned back in her chair, a slight frown creasing her brow. “Still waiting on those thermal tests from the Shanghai fab,” she muttered, more to herself than anyone else. The supply chain was… well, it was what it was. US export controls, and the ongoing chip wars, meant that every deployment was a delicate dance.

    The AWS Weekly Roundup also mentioned other updates, including enhancements to the Amazon S3 service and new features for the AWS Lambda compute service. It was, as usual, a flurry of activity, reflecting the relentless pace of innovation in the cloud computing space. It’s kind of overwhelming.

    By evening, the data center was still humming, the team was still working, and the cloud, as always, was expanding. The updates kept coming, and the world kept changing. The European Sovereign Cloud and the EC2 X8i instances, in a way, represented both the promise and the challenges of the future: innovation, regulation, and the ever-present shadow of the global supply chain.