Turning Weeks into Hours – How AI is Already Upending Microchip Engineering

The modern technological age thrives on the efficient use of microchips. By one estimate, the global microchip market is expected to grow from around US$21.56 billion in 2022 to nearly US$47 billion by 2030, a compound annual growth rate of over 10%.

The market grows at such a pace because microchips drive virtually all the electronics around us: not only computers but also smartphones, network switches, home appliances, car and aircraft components, televisions, amplifiers, IoT devices, and countless other electronic systems.

Also called a chip, computer chip, or integrated circuit, a microchip is a unit of integrated circuitry manufactured at a microscopic scale using a semiconductor material, such as silicon or germanium. A modern microchip packs millions or even billions of components, such as transistors, resistors, capacitors, and diodes, into an intricate design.

These components are incredibly small, with features measured in nanometers. Microchip design operates at such a fine level that in 2021 IBM introduced a chip built on 2 nm process technology, a feature size narrower than a strand of human DNA.

Rapid progress keeps expanding what microchips can do. Specialized chips now handle signals for cutting-edge wireless technology, but they are expensive and time-consuming to design, relying on extreme miniaturization and high-end engineering.

Now, a team of engineers from Princeton Engineering and the Indian Institute of Technology Madras has harnessed AI to cut the cost and design time of the chips that will meet expanding demands for wireless speed and performance. In the next segment, we delve deeper into this breakthrough research.

AI Creates Complicated Electromagnetic Structures and Associated Circuits in Microchips

The researchers developed a methodology that lets AI create complicated electromagnetic (EM) structures and the circuits associated with them. Before this breakthrough, achieving the same results could take weeks of highly skilled work; now it can be accomplished in hours.

In more technical terms, the researchers demonstrated a universal inverse-design approach for arbitrarily shaped, complex multi-port electromagnetic structures with designer radiative and scattering properties, co-designed with active circuits.

To achieve this, the researchers deployed deep-learning-based models and demonstrated synthesis with several examples of complex millimeter-wave (mm-wave) passive structures and end-to-end integrated mm-wave broadband circuits. According to the researchers, the inverse-design methodology, which can produce designs in minutes, could be transformative in opening up a new, previously inaccessible design space.
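The team's code is not published here, but the general shape of deep-learning-based inverse design can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the researchers' actual method: the grid and frequency sizes are made up, the surrogate network is an untrained stand-in for a model that would normally be trained on electromagnetic-solver data, and PyTorch is used purely for convenience. The idea is to relax a binary metal/no-metal pixel mask to continuous values and push it, by gradient descent through the surrogate, toward a target broadband response.

```python
# Hypothetical sketch of neural-surrogate inverse design (illustrative only).
import torch
import torch.nn as nn

N_PIXELS, N_FREQS = 16 * 16, 64            # assumed pixel grid and frequency samples

# Stand-in surrogate: maps a pixelated metal mask to a predicted |S21| response.
# In practice this network would be trained on full-wave EM simulation data.
surrogate = nn.Sequential(
    nn.Linear(N_PIXELS, 256), nn.ReLU(),
    nn.Linear(256, N_FREQS),
)

target = torch.full((N_FREQS,), 0.9)                  # e.g. a flat passband target
logits = torch.zeros(N_PIXELS, requires_grad=True)    # continuous relaxation of the mask
opt = torch.optim.Adam([logits], lr=0.05)

for step in range(200):
    mask = torch.sigmoid(logits)            # squash pixels into [0, 1]
    response = surrogate(mask)              # predicted response for this structure
    loss = ((response - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()                         # gradients flow back through the surrogate
    opt.step()

print("final mismatch:", loss.item())
```

In a real flow, the optimized mask would be thresholded into a manufacturable metal pattern and verified against a full electromagnetic solver before being connected to the active circuitry.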

According to Kaushik Sengupta, the lead researcher, a professor of electrical and computer engineering, and co-director of NextG, Princeton’s industry partnership program to develop next-generation communications:

“We are coming up with structures that are complex and look random shaped, and when connected with circuits, they create a previously unachievable performance. Humans cannot understand them, but they can work better.”

The research enables engineers to make these circuits more energy-efficient and operable across a significantly wider frequency range. AI synthesizes these inherently complex structures in minutes, where conventional algorithms often needed weeks to achieve the same results.

While elaborating on the role of AI in making this efficient design possible, Uday Khankhoje, a co-author and associate professor of electrical engineering at IIT Madras, had the following to say:

“AI powers not just the acceleration of time-consuming electromagnetic simulations, but also enables exploration into a hitherto unexplored design space and delivers stunning, high-performance devices that run counter to the usual rules of thumb and human intuition.”

Classical design, according to Professor Sengupta, puts circuits and electromagnetic elements together piece by piece, modifying known structural templates to obtain new properties. With AI in the picture, he believes, the space of options is far larger than the finite set of configurations the earlier approach allowed.

Where AI makes the difference is in perspective. The intricate geometry of chip circuitry often keeps human designers from attempting unconventional layouts, because the complexity involved is too great to reason about directly. AI, by contrast, treats the chip as a single artifact, which leads to strange but effective arrangements.

For instance, the researchers have used AI to discover and design complex electromagnetic structures that are co-designed with circuits to create broadband amplifiers. Future research will go deeper into linking multiple structures and designing complete wireless chips with AI.

Summarizing the research's potential, Sengupta said: “This is just the tip of the iceberg in terms of what the future holds for the field.”

AI-powered chips are revolutionizing the world well beyond the tech labs. One company at the forefront of this revolution is Nvidia, which extended its lead in artificial intelligence almost a year ago with the unveiling of a new “super chip.” We look at what makes this chip ‘super’ in the next segment.

Click here to learn about biomimetic olfactory chips.

1. Nvidia (NVDA)

Nvidia's breakthrough announcement came at the company's annual developer conference, where it unveiled the Blackwell series of AI chips. The series is designed to power the cutting-edge data centers that train frontier AI models such as the latest generations of GPT, Claude, and Gemini.

The Blackwell B200 is an upgrade over the company's H100 AI chip, which struggles to keep up with the demands of today's massive AI models. According to published reports, training an AI model the size of GPT-4 would require 8,000 H100 chips and 15 megawatts of power.

That much power could supply about 30,000 typical British homes. With the new chips, Nvidia says, the same training run would need just 2,000 B200s and 4 megawatts of power.
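A quick back-of-the-envelope check on those reported figures shows where the savings come from; the short Python snippet below simply restates the numbers quoted above.

```python
# Back-of-the-envelope check on the reported GPT-4-scale training figures.
h100_chips, h100_power_mw = 8_000, 15
b200_chips, b200_power_mw = 2_000, 4

print(f"chip count reduction:  {h100_chips / b200_chips:.1f}x")         # 4.0x fewer chips
print(f"total power reduction: {h100_power_mw / b200_power_mw:.2f}x")   # 3.75x less power

# Per-chip draw stays roughly flat; the savings come from needing far fewer chips.
print(f"kW per H100: {h100_power_mw * 1000 / h100_chips:.2f}")  # ~1.88 kW
print(f"kW per B200: {b200_power_mw * 1000 / b200_chips:.2f}")  # ~2.00 kW
```

In other words, by these figures the efficiency gain comes mostly from each B200 doing roughly four times the work of an H100 at a similar power draw, rather than from each chip drawing less power.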

While that is a significant reduction in electricity consumption, the real story lies in the second part of the Blackwell line: the GB200 Superchip, which pairs two B200 chips with the company's Grace CPU on a single board. Nvidia says the resulting system offers “30x the performance” for the server farms that run, rather than train, chatbots such as Claude or ChatGPT.

Microchip Technology, a leading supplier of smart, connected, and secure embedded control solutions, has also been working with NVIDIA. In November last year, it released the PolarFire® FPGA Ethernet Sensor Bridge, which works with the NVIDIA Holoscan sensor processing platform and is meant to let developers build AI-driven sensor processing systems.

NVIDIA Holoscan streamlines the development and deployment of AI and high-performance computing (HPC) applications at the edge for real-time insights, bringing the hardware and software required for low-latency sensor streaming and network connectivity into a single platform.

While NVIDIA is helping the industry thrive, reports suggest the company itself is growing just as rapidly. On January 7, 2025, Nvidia CEO Jensen Huang said that the performance of his company's AI chips was advancing faster than the historical rates set by Moore's Law, the rubric that drove computing progress for decades.

Huang claimed that the company's AI chips were advancing at an accelerated pace of their own, with the latest data center superchip more than 30 times faster at running AI inference workloads than the previous generation. Explaining how Nvidia achieves such rapid progress, Huang said:

“We can build the architecture, the chip, the system, the libraries, and the algorithms all at the same time. If you do that, then you can move faster than Moore’s Law because you can innovate across the entire stack.”

According to Huang, Nvidia's latest data center superchip, the GB200 NVL72, is 30 to 40 times faster at running AI inference workloads than Nvidia's previous best-selling chips. The company is focused on building ever more performant chips that, in the long run, will be available at lower prices.
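For a rough sense of what “advancing faster than Moore's Law” means, the snippet below compares the claimed generational speedup with the classic doubling every two years. The two-year gap between chip generations is our assumption for the calculation, not a figure quoted by Nvidia.

```python
# Rough comparison of the claimed inference speedup with a Moore's-Law pace.
# Assumes ~2 years between data center chip generations (an assumption, not Nvidia's figure).
years_between_generations = 2
claimed_speedup = 30                      # low end of the 30-40x inference claim

moores_law_factor = 2 ** (years_between_generations / 2)     # ~2x every two years
implied_annual = claimed_speedup ** (1 / years_between_generations)

print(f"Moore's Law over {years_between_generations} years: {moores_law_factor:.1f}x")
print(f"claimed generational speedup: {claimed_speedup}x (~{implied_annual:.1f}x per year)")
```

Under that assumption, even the low end of the claim implies roughly a 5x improvement per year, against the roughly 1.4x per year that a strict Moore's Law doubling would suggest.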

“The direct and immediate solution for test-time computing, both in performance and cost affordability, is to increase our computing capability.”

– Huang

Overall, Huang claims, Nvidia's AI chips today are 1,000 times more capable than what the company made 10 years ago.


For fiscal 2024, Nvidia's revenue was up 126% to a record $60.9 billion. GAAP earnings per diluted share were $11.93, up 586% from a year ago, while non-GAAP earnings per diluted share were $12.96, up 288%. Non-GAAP gross margin was 73.8%.

2. Cadence (CDNS)

A leader in electronic systems design, Cadence supplies software and specialized computer servers to Nvidia and Apple. The company has been incorporating AI technology into its products to improve the quality of their output, believing AI can help tackle semiconductor design, a problem that grows ever more complex as technology advances.

CEO Anirudh Devgan recently remarked that the company’s AI chip design tools offer chip performance and density benefits similar to the transition to a next-gen process node but without having to move to a new node. 

So, by applying AI to chip design, Cadence is optimizing the power, performance, and area (PPA) of chips (the most important metrics for chipmakers besides cost), automating tasks such as place-and-route and debugging, and helping bridge the gap in chip design talent.
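As a rough illustration of what such a PPA trade-off looks like, a design-space explorer might rank candidate layouts with a normalized weighted score along the lines of the sketch below. The weights and candidate numbers are invented for the example and are not Cadence metrics or real silicon data.

```python
# Hypothetical weighted PPA score: each metric is normalized to a baseline design
# so the weighted sum is unitless. Lower is better. All numbers are illustrative.
BASELINE = {"power_mw": 120.0, "delay_ns": 1.30, "area_mm2": 2.10}
WEIGHTS  = {"power_mw": 0.4,   "delay_ns": 0.4,  "area_mm2": 0.2}

def ppa_score(candidate: dict) -> float:
    return sum(WEIGHTS[k] * candidate[k] / BASELINE[k] for k in WEIGHTS)

candidates = {
    "baseline":       BASELINE,
    "ai_optimized_a": {"power_mw": 105.0, "delay_ns": 1.22, "area_mm2": 2.00},
    "ai_optimized_b": {"power_mw": 110.0, "delay_ns": 1.15, "area_mm2": 2.20},
}

for name, metrics in candidates.items():
    print(f"{name}: {ppa_score(metrics):.3f}")   # baseline scores exactly 1.000
```

In practice the objective would be far richer (timing closure, routability, thermal limits), but the basic idea of weighing power, performance, and area against one another is the same.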

To deliver these improvements, the company provides a comprehensive “chips to systems” generative AI solution that it claims offers 10X productivity gains while enhancing performance across all design domains. In total, Cadence has five major AI platforms: analog, digital, verification, PCB, and package and system analysis.

Albert Zeng, Sr. Software Engineering Group Director in the System Design and Analysis Group at Cadence, stated in a recent interview:

“AI will have a big impact from a productivity point of view because using an AI assistant that can actually extract the data from a past experience can help guide young engineers or designers to make a better decision about their design or fixing it.”

Just over two years ago, Cadence released Voltus InsightAI, which it billed as the industry's first generative AI technology of its kind. The tool automatically finds the root causes of EM-IR (electromigration and IR voltage-drop) violations at the very beginning of the design process, a computationally expensive task, and then chooses and applies the most effective fixes to improve PPA.

In recent months, Cadence has collaborated with TSMC to boost productivity and improve the performance of AI-driven advanced-node designs as well as three-dimensional integrated circuits (3D-ICs), in order to meet what it says is “unprecedented demand” for advanced silicon solutions that can handle massive datasets and computations. As a result, AI-driven design flows are now available for TSMC's latest N2P (2nm-class) technology, which offers 5% to 10% higher performance, as well as for its N3 technologies.

The $80.2 billion market-cap company, whose shares currently trade at around $292.50, reported an almost 20% increase in revenue for the quarter ending in September, its biggest jump in at least six quarters, with revenue coming in at $1.22 billion. The company is betting on a boom in generative AI to keep driving demand for its products and, as a result, raised the midpoint of its adjusted annual profit forecast to $5.90 per share.


Besides this boom, Cadence’s new generation of Palladium supercomputers can also contribute to this growth. The company’s GAAP operating margin for the quarter was 29%, and the non-GAAP operating margin was 45%. GAAP diluted net income per share was $0.87, and the non-GAAP diluted net income per share was $1.64. Cadence also reported a backlog of $5.6 billion.

Conclusion

AI is one of the most rapidly advancing technologies of this decade; it has already begun to affect businesses across industries and is projected to contribute over $15 trillion to the global economy by the end of the decade. The integration of AI into microchip design can further streamline tasks, unlocking yet another level of efficiency, performance, and scalability.

Click here to learn about the quantum chip milestone that is stoking excitement for the future of computing.


