Elon Musk revealed Terafab on Sunday, a massive chip-making facility designed to produce enough processors to power his companies’ ambitious plans for electric cars, humanoid robots, and space exploration. The announcement came during a livestream from the Seaholm Power Plant in Austin, where Musk called it the most epic chip-building exercise in history. The project brings together Tesla, SpaceX, and xAI in a joint venture with a price tag between $20 billion and $25 billion.
The reasoning behind Terafab is straightforward. Current chip suppliers simply cannot expand fast enough to meet the growing demands of AI, robotics, and space technology. Musk emphasized that without building Terafab, his companies would lack the necessary chips to achieve their goals. Tesla’s corporate account stated the facility aims to produce one terawatt of chip output per year.
The first phase will begin on the Giga Texas campus in Austin. This initial facility will be unique because it houses everything under one roof: logic circuits, memory, packaging, testing, and even lithography mask production. Engineers can design chips, fabricate them, and test wafers within just days. No other facility worldwide offers this rapid iteration capability in a single building.
The full-scale version will require something much bigger. Musk mentioned needing thousands of acres and over 10 gigawatts of power, far exceeding what Giga Texas can handle. Multiple large sites are under consideration for this expansion. The target includes producing 100 to 200 gigawatts of compute power annually on Earth, plus one terawatt of compute power in space.
Terafab will produce specialized chips for different environments. Edge inference chips will power Tesla vehicles and Optimus robots. Space-hardened chips will withstand harsh conditions in orbit, running hotter to reduce radiator weight while resisting radiation damage. Meanwhile, Musk’s companies will continue purchasing from suppliers like TSMC, Samsung, and Micron. Production is expected to ramp toward 2027, though specific timelines remain unclear.