Imagine predicting, in a short space of time, how a new molecule could fight a virus, or simulating, in high detail, the weather across Brazil over the coming weeks. These tasks, which push the limits of science, are feasible today thanks to supercomputing. Also known as high-performance computing (HPC), this technology uses computers with thousands of interconnected processors operating in parallel to solve complex problems, and has consolidated itself as one of the pillars of the natural sciences and modern engineering. Its relevance is attested by multiple Nobel Prizes, such as those in Chemistry awarded to researchers for the study and design of biomolecules using computer simulations.
The practical application of supercomputing is also decisive for innovation: in the pharmaceutical industry, for example, it speeds up the identification and characterization of promising molecules, reducing the time and cost of developing new drugs. Now, Artificial Intelligence (AI) is emerging as a new protagonist. A notable example is AlphaFold, a system that predicted the structures of 200 million proteins from their amino acid sequences, revolutionizing biology.
AI-based methods also rely on supercomputing, typically on computers equipped with specialized chips such as GPUs, both for training and tuning models with billions of parameters and for large-scale inference — when these methods are applied at scale, as in today's large language models (LLMs). Many of today's major scientific and technological advances, whose possibilities have been expanded by AI techniques, therefore rest on supercomputing and demand a robust computing infrastructure.
The capacity and organization of these supercomputers are determining factors. In Brazil, the Santos Dumont supercomputer, installed at the National Laboratory for Scientific Computing (LNCC) since 2015, is a notable example and has been essential for high-impact research that would be unfeasible without this level of computing power. However, the computing capacity installed in the country remains modest relative to our scientific and innovation needs, and it is poorly structured. It is necessary to organize the infrastructure in tiers: from smaller computers installed in laboratories or research institutes, through regional centers in which each state or region of the country hosts medium-sized machines, up to large, world-class national supercomputers.
This model must be followed carefully, ensuring that access to the most powerful machines is granted based on actual need for parallel computing, and not just because of the lack of smaller-scale resources. Further, funding agencies must establish policies to continuously fund these different tiers of computing infrastructure, preventing them from aging and thus losing capacity.
Two recent initiatives stand out in the expansion of this infrastructure. The HPC-FAPESP call for proposals, concluded in 2023, will result in the creation of the São Paulo Scientific SuperComputing Center (C3SP), a major step toward expanding computing capacity in our state. Moreover, the project foresees the revitalization of five regional centers throughout the country that make up the National High Performance Processing System (Sinapad). These initiatives contribute to the organization of a tiered infrastructure, as mentioned above. Meanwhile, the Brazilian Artificial Intelligence Plan (PBIA 2024-2028) provides for investments of up to R$3 billion from the National Fund for Scientific and Technological Development (FNDCT) to modernize the country's computing infrastructure. This promising funding must go beyond the acquisition of equipment: it is crucial to ensure long-term maintenance, with hardware upgrades and a stable electricity supply, since a supercomputer can consume as much energy as a small town. The sustainability of this infrastructure must therefore be a priority, with energy-efficient solutions such as cooling with recycled water or even the use of solar power.
Another relevant aspect is the location of these supercomputers. One option is to install them in data centers directly linked to universities and research centers. The infrastructure can be housed in traditional buildings, warehouses, or even containers, the latter allowing greater mobility and faster installation. Further, the technical team responsible for operations can be trained at the center itself, contributing to the formation of experts in the field. Another possibility is the "colocation" model, in which equipment is hosted in private data centers, with outsourced maintenance. An emerging alternative is the creation and sharing of data centers through public-private partnerships (PPPs) or state-owned companies such as Prodesp or Serpro. The recent R$2 billion credit line announced by the BNDES for the construction of national data centers is a positive sign.
Regardless of the model adopted, the crucial part of this ecosystem is training qualified personnel. The next generation of scientists and engineers must be prepared to operate and innovate in these computing environments. Here we have a strategic opportunity: linking government funding to practical training. For example, funding agencies can offer specific grants for research or innovation projects that use supercomputing. Additionally, partnerships between universities and high-tech companies, such as IBM, USP's partner in the recent C4AI center, could train students and professionals in advanced AI and supercomputing tools, allowing the country to compete in this area.
Finally, we need an integrated vision for the future. Initiatives such as the Brazilian AI Plan, calls for proposals, and credit lines for the installation of computing infrastructure are vital, but they need to be part of a long-term vision for national development. The convergence of supercomputing and Artificial Intelligence brings academia and science closer to technological innovation and the production chain in an unprecedented way.
AI requires a solid scientific basis, making collaboration between universities, research centers, and industry essential. Brazil thus has the opportunity to use HPC and AI not just to publish scientific articles, but to transform knowledge into actual innovation – from affordable drugs to disaster prediction systems. To achieve this, however, machines are not enough: we need to invest in people, ethics, and an infrastructure that unites academia, government, and the productive sector in a cohesive and sustainable strategy.
Acknowledgments: The author would like to thank Pedro Dias (Institute of Astronomy, Geophysics and Atmospheric Sciences – USP) and Antônio Tadeu Gomes (National Laboratory of Scientific Computing) for excellent discussions and suggestions on the text.
(The opinions expressed in the articles published in Jornal da USP are the sole responsibility of their authors and do not reflect the opinions of the newspaper or the institutional positions of the University of São Paulo.)
English version: Nexus Traduções