Foxconn Unveils First Large Language Model Dubbed ‘FoxBrain’
New Mania Desk / Piyal Chatterjee / 10th March 2025

Taiwan’s Foxconn said on Monday that it has launched its first large language model and plans to use the technology to improve its manufacturing and supply chain operations.
The model, named “FoxBrain,” was trained using 120 of Nvidia’s H100 GPUs and completed in about four weeks, the world’s largest contract electronics maker said in a statement.
The company, which assembles iPhones for Apple and manufactures Nvidia’s AI servers, said the model is based on Meta’s Llama 3.1 architecture. According to Foxconn, it is Taiwan’s first large language model with reasoning capabilities, optimized for traditional Chinese and Taiwanese language styles.
Foxconn said that although there is a slight performance gap compared with China’s DeepSeek distillation model, FoxBrain’s overall performance is close to world-class standards.
Initially designed for internal use, FoxBrain covers data analysis, decision support, document collaboration, mathematics, reasoning and problem-solving, and code generation.
Foxconn said it plans to collaborate with technology partners to expand the model’s applications, share its open-source information, and promote AI in manufacturing, supply chain management, and intelligent decision-making.
Foxconn said Nvidia provided support through its Taiwan-based supercomputer “Taipei-1” and offered technical consulting during the model’s training. Taipei-1, the largest supercomputer in Taiwan, is owned and operated by Nvidia in Kaohsiung, a city in the south of the island.