The Unknowns of AI—and Why We Need Climatetech Startups to Help Us Navigate Them

At Greentown Labs, we’re working to catalyze climatetech innovation that addresses the world’s most urgent energy and sustainability challenges. At the latest Greentown x Lux Research Industry Forum, we explored how the rapid growth of artificial intelligence (AI) is reshaping the landscape of physical infrastructure—and where climate-focused innovation is urgently needed to meet the moment.

AI is no longer an abstract or futuristic technology. It is now a tangible, resource-intensive force driving demand for electricity, water, land, and talent at unprecedented scale. However, massive questions remain about how the industry will continue to take shape—and how costly its infrastructure will be in terms of capital and resources. Opinions abound, from some debating whether AI will be “the mother of all economic bubbles” to others speculating about what happens if that bubble bursts. Extensive attention is being paid to the energy mix powering data centers, as well as the potential cost to ratepayers near build-outs, while others are focusing on how data center development could achieve long-term sustainability or how AI itself could help manage the grid that it’s potentially straining.

Whether forum participants were there representing a startup or a corporate, there was clear alignment across the group: the infrastructure enabling AI has become one of the most important climate, economic, and innovation challenges of our time.

AI Is Accelerating—and Straining the Physical World

The expansion of large language models, generative AI, and real-time inference is pushing infrastructure beyond its limits. According to Lux Research, global data center electricity demand could more than double by 2030, potentially exceeding 945 terawatt-hours, or roughly the entire electricity consumption of Japan today.

Representing interests from across climatetech, venture capital, AI, infrastructure, and utilities, forum participants voiced a shared sense of urgency: the systems needed to power AI—from transmission lines to cooling systems and trained technicians—are not keeping pace. The result is a convergence of climate, grid, economic, and political pressures, compounded by permitting delays, water constraints, and regional power shortages. 

The discussion focused on the technical realities, emerging bottlenecks, and transformative potential of powering AI systems—while surfacing a number of major unknowns that are shaping how this future unfolds. There was also strong alignment around the idea that the situation requires bold innovation, better coordination across sectors, and a willingness to prioritize and adopt new technologies amid mounting market uncertainty.

Key Themes and Takeaways

1. AI is Already Stressing Infrastructure Limits

Participants expressed concern that data-center deployment is outpacing the capacity of current grid systems, permitting timelines, and water resources. Thermal load, site availability, and substation delays were frequently cited as key bottlenecks. Power—not compute—was framed as the dominant constraint on scaling. But many attendees also acknowledged a larger unknown: How will AI infrastructure scale if chip production, permitting, and workforce readiness remain limited? Setting aside the power concerns that come with rapid-growth scenarios, are slow-growth scenarios even feasible given these additional constraints? These questions introduce risk to near-term forecasts—but also open the door for agile, distributed, and low-footprint alternatives to traditional hyperscale deployments. 

2. Centralized vs. Distributed AI Remains an Open Question

Participants debated whether AI will remain centralized—dominated by a few hyperscalers—or begin shifting to smaller, regional, edge, or industrial deployments. Each model presents tradeoffs:

  • Centralized AI offers performance, cost, and model integration benefits—potentially better suited for large language models or generative applications.
  • Distributed AI could unlock latency reduction, site flexibility, energy optimization, and community integration—potentially better suited for robotics and high-speed industrial applications where low latency is critical.

Forum participants did not agree on which path would dominate—but noted this divergence presents a significant opportunity for startups to validate new deployment models across industries and geographies.

3. Energy and Water Innovation Is Critical

Thermal management and water efficiency emerged as two of the most urgent engineering needs for AI infrastructure. Forum participants showed strong interest in liquid-immersion and zero-water cooling systems; waste-heat recovery and reuse; water-independent siting models; and AI-driven optimization of cooling systems.

Startups in these categories are well positioned to solve for both ESG impact and operational bottlenecks, providing clear business value to data centers regardless of how the larger infrastructure model evolves.

4. AI’s Business Model is Still Evolving

Several participants noted that while technical capacity is growing rapidly, the business model for AI—especially generative and language models—remains unclear. There was extensive discussion of whether the current freemium model is sustainable and whether future revenues will come from ads, subscriptions, specialized training, custom data sets, integration with physical systems, or elsewhere. This uncertainty creates risk for infrastructure investors—but also space for startups to explore new business cases at the intersection of AI and physical climate systems, such as predictive maintenance, supply-chain optimization, or demand forecasting.

5. Consumer Demand and Use Cases Remain Fluid

Finally, participants questioned the depth and durability of consumer demand. There was speculation that AI infrastructure is scaling faster than real-world use cases justify, especially outside of enterprise and industrial optimization. Forum attendees noted parallels to earlier tech bubbles, where first-mover competition sometimes outran underlying need. The open question—“Is current AI infrastructure growth demand-driven or defensive?”—raises the stakes for climate-aligned deployment: if infrastructure is being built speculatively, startups and governments alike must ensure that what is built is also sustainable, modular, and grid-aware. The group agreed on one thing, though—the likelihood of unused compute is low, as hyperscalers will always make use of any existing GPUs.

Startup Opportunity Areas

Despite the open questions, the forum highlighted several urgent and actionable innovation fronts for startups:

Grid‑Responsive Power Systems

Infrastructure will need to evolve from always‑on, rigid demand toward power systems that respond dynamically to grid conditions—load flexibility is both a clear challenge and a critical capability. Key aspects include integrating demand response programs (where AI workloads can be shifted or throttled during peak grid stress), utilizing behind‑the‑meter generation to offload some of the burden on transmission, and leveraging renewable energy co‑location.

Given current interconnection delays and transmission constraints, solutions in this area may include software/firmware that automatically reprioritizes compute jobs based on power price signals or grid availability, and energy-storage or load‑shifting mechanisms that smooth peaks. The idea is to reduce the heavy load that new AI data centers impose on regional grids, especially in areas where grid capacity is already constrained.
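To make the load-shifting idea concrete, here is a minimal sketch of a scheduler that defers flexible workloads when a wholesale price signal (used as a proxy for grid stress) crosses a threshold. The threshold value, job names, and the notion of a per-job "flexible" flag are illustrative assumptions, not a description of any real system:

```python
# Hypothetical sketch: deferring flexible AI workloads based on a grid
# price signal. The threshold and price feed are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Job:
    name: str
    flexible: bool  # can this workload be deferred (e.g., batch training)?


def schedule(jobs, price_per_mwh, deferral_threshold=90.0):
    """Run inflexible jobs immediately; defer flexible ones while the
    wholesale price (a proxy for grid stress) exceeds the threshold."""
    run_now, deferred = [], []
    for job in jobs:
        if job.flexible and price_per_mwh > deferral_threshold:
            deferred.append(job)
        else:
            run_now.append(job)
    return run_now, deferred


jobs = [Job("inference-api", flexible=False), Job("model-training", flexible=True)]
run_now, deferred = schedule(jobs, price_per_mwh=120.0)
# During peak pricing, latency-sensitive inference keeps running while
# batch training waits for cheaper, less-constrained power.
```

In practice the price signal might come from a utility tariff, a wholesale market feed, or a carbon-intensity API, and deferral would interact with job deadlines—but the core decision is this simple comparison.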

Greentown members and alumni innovating in this or adjacent areas include:

Water‑Efficient Cooling

Cooling remains one of the largest consumers of water and energy in modern data centers. With compute densities rising—GPUs, TPUs, and accelerators generating hot spots—traditional air cooling is often insufficient or inefficient. Innovations that reduce water use in thermal management (for example, evaporation‑free or closed‑loop water systems), deploy liquid or immersion cooling more broadly, or reuse waste heat for other processes are becoming increasingly essential. Addressing water usage isn’t just an environmental imperative; it also mitigates financial, permitting, and reputational risk.
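One reason water efficiency translates directly into business value is that it is measurable. Water Usage Effectiveness (WUE)—liters of water consumed per kilowatt-hour of IT energy—is a common industry yardstick. A minimal calculation, with illustrative numbers, might look like:

```python
# Sketch: Water Usage Effectiveness (WUE), a standard data center metric
# expressed in liters of water per kWh of IT energy. The figures used
# below are illustrative, not measurements from any real facility.

def wue(annual_water_liters: float, annual_it_energy_kwh: float) -> float:
    """Liters of site water consumed per kWh of IT equipment energy."""
    return annual_water_liters / annual_it_energy_kwh


# A facility using 1M liters/year against 500k kWh of IT load:
example = wue(1_000_000, 500_000)  # 2.0 L/kWh
# Evaporation-free, closed-loop designs aim to push WUE toward zero.
```

Startups that can demonstrably lower this number give operators a concrete answer for permitting authorities and ESG reporting alike.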

Greentown members and alumni innovating in this or adjacent areas include:

Modular and Edge Infrastructure

Because of the delays in permitting, supply-chain constraints for high‑density compute hardware, and grid and land-access issues, there is strong interest in smaller, modular deployments closer to end users or specific industrial or enterprise operations. 

As highlighted above, edge compute models can reduce latency, use localized or smaller‑scale energy and water resources more efficiently, and avoid or mitigate some of the infrastructure overhead associated with large hyperscale facilities. Modular infrastructure also allows for incremental deployment—adding capacity as demand materializes—and increases resilience by distributing risk and reducing single points of failure.

Greentown members innovating in this area include:

AI for Infrastructure Planning

The complexity of siting, designing, and managing AI infrastructure is growing—from evaluating grid capacity to anticipating thermal hotspots, assessing water availability, handling permitting timelines, and navigating local regulations. Infrastructure-planning tools powered by AI can improve forecasting of energy needs, simulate thermal loads and cooling requirements, map suitable sites with minimal environmental or social friction, model interconnection and transmission-path constraints, and even predict where bottlenecks might emerge. Such tools can shorten lead times, reduce cost overruns, and allow both builders and regulators to plan more proactively.
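As a toy illustration of the forecasting piece, a planning tool might start from something as simple as a trailing moving average over historical load. The model and window size here are illustrative assumptions only—real planning tools layer in weather, workload growth, and interconnection constraints:

```python
# Hypothetical sketch: estimating peak facility load from hourly history
# using a trailing moving average. A deliberately minimal stand-in for
# the far richer models real planning tools would use.

def forecast_peak_load(hourly_loads_mw: list[float], window: int = 24) -> float:
    """Estimate peak load as the maximum trailing moving average (MW)."""
    if len(hourly_loads_mw) < window:
        raise ValueError("need at least one full window of history")
    averages = [
        sum(hourly_loads_mw[i:i + window]) / window
        for i in range(len(hourly_loads_mw) - window + 1)
    ]
    return max(averages)


# Two days of steadily ramping load: the forecast tracks the latest window.
peak = forecast_peak_load([float(x) for x in range(48)])  # 35.5 MW
```

Even this crude estimate hints at the value: a planner comparing candidate sites can turn load histories into a single capacity number to check against available interconnection headroom.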

Greentown members and alumni innovating in this or adjacent areas include:

Conclusion

The conversation made it abundantly clear: AI infrastructure is a climate issue—and a climate opportunity. But it’s also a space filled with unknowns; the shape of demand, the pace of buildout, and the dominant players are all still being determined. Startups that can navigate this uncertainty and build solutions that are adaptable, efficient, and sustainable will be essential in shaping what comes next—regardless of which growth scenario comes to fruition.