
A Hub for Evolution – Artificial Intelligence Versatility on Display with Recent Achievements

Innovations in artificial intelligence are shaping the future of businesses across almost all sectors. From healthcare, manufacturing, finance, education, entertainment, and legal to media, customer service, transportation, and more, virtually no major industry has been left untouched by AI.

According to an IBM survey from 2023, 42% of enterprise-scale businesses have already integrated AI into their operations, while another 40% are considering the technology for their organizations.

This makes sense, given that AI has the potential to transform productivity and, in turn, boost an economy’s GDP.

According to PwC estimates, AI could contribute $15.7 trillion to the global economy by the end of this decade, with 45% of the total economic gains coming from AI-driven product enhancements in the form of greater affordability, attractiveness, variety, and personalization, which stimulate consumer demand. Meanwhile, PwC expects $6.6 trillion of the boost to local economies’ GDP to come from increased productivity.

AI is fast becoming a key source of disruption and competitive advantage, acting as a central hub for advancing nearly every industry. This huge potential and versatility of AI can be seen in recent advancements made with the help of this technology. 

Predicting Thermal Properties

An interesting application of AI is predicting the thermal properties of materials. This can help engineers design faster microelectronic devices and more efficient energy-conversion systems while reducing waste heat.

Understanding the relationship between structure and properties is important when designing materials with specific characteristics. Machine learning methods have already made significant progress in this regard. However, challenges remain in model generalizability and in predicting certain properties.

To tackle these issues, the latest research presents a virtual node graph neural network (VGNN). With their virtual node model, the researchers were able to predict Γ-phonon spectra and full phonon dispersion directly from atomic coordinates. By combining the approach with machine-learning interatomic potentials, the team achieved much higher efficiency with better accuracy.

The ability to calculate phonon band structures quickly and accurately is critical because an estimated 70% of the energy produced worldwide actually ends up as waste heat. If scientists can predict how heat moves through insulators and semiconductors, more efficient power generation systems can be designed. 

The catch is that the thermal properties of materials can be very difficult to model. This is because of phonons, the quanta of vibrational mechanical energy.

These quasiparticles carry heat, and some of a material’s thermal properties depend on the phonon dispersion relation, the relationship between the energy of phonons and their momentum in the crystal structure. Not only is this relation difficult to incorporate into a system’s design, but obtaining it in the first place also poses significant challenges.
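
For reference, in textbook notation each phonon branch maps a wavevector in the crystal to a vibrational frequency, with the phonon’s energy following from that frequency. This is standard physics notation shown for context, not an equation taken from the paper:

```latex
% Standard textbook form of a phonon dispersion relation (for context, not from the paper):
% branch index s, wavevector k in the Brillouin zone, frequency omega_s(k),
% and the reduced Planck constant hbar giving the phonon energy.
E_s(\mathbf{k}) = \hbar \, \omega_s(\mathbf{k})
```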

According to senior author Mingda Li, an associate professor of nuclear science and engineering:

“Phonons are the culprit for the thermal loss, yet obtaining their properties is notoriously challenging, either computationally or experimentally.”

It’s because of their extremely wide frequency range that heat-carrying phonons are so hard to predict. Moreover, these particles travel and interact at varying speeds.

Researchers have been trying to estimate phonon dispersion relations with ML for years, but the models bog down in the sheer number of high-precision calculations involved.

“If you have 100 CPUs and a few weeks, you could probably calculate the phonon dispersion relation for one material. The whole community really wants a more efficient way to do this.”

– Co-lead author Ryotaro Okabe, a chemistry graduate student

The ML models used to make these high-precision calculations for estimating phonon dispersion relations are graph neural networks (GNNs). These networks convert a material’s atomic structure into a crystal graph.

The crystal graph comprises multiple nodes connected by edges: the nodes represent atoms, while the edges represent the bonds between them.
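
To make the picture concrete, here is a minimal Python sketch, not taken from the paper, of how an atomic structure could be turned into such a crystal graph; the elements, coordinates, and bonding cutoff are purely illustrative assumptions:

```python
import numpy as np

# Minimal, illustrative crystal-graph construction (not the authors' code).
# Nodes are atoms; an edge connects two atoms whose distance falls below a cutoff.
ELEMENTS = ["Ga", "As"]                     # hypothetical two-atom basis
POSITIONS = np.array([[0.00, 0.00, 0.00],   # coordinates in angstroms, illustrative only
                      [1.41, 1.41, 1.41]])
CUTOFF = 3.0                                # assumed bonding cutoff in angstroms

def build_crystal_graph(elements, positions, cutoff):
    """Return nodes (atoms) and edges (pairs of atoms within the cutoff distance)."""
    nodes = list(enumerate(elements))
    edges = []
    n = len(elements)
    for i in range(n):
        for j in range(i + 1, n):
            distance = np.linalg.norm(positions[i] - positions[j])
            if distance < cutoff:
                edges.append((i, j, distance))
    return nodes, edges

nodes, edges = build_crystal_graph(ELEMENTS, POSITIONS, CUTOFF)
print(nodes)   # [(0, 'Ga'), (1, 'As')]
print(edges)   # [(0, 1, 2.44...)] -> one bond-like edge within the cutoff
```

A GNN would then attach a feature vector to each node and pass messages along these edges to learn material properties.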

GNNs have been working well for calculating electrical polarization and magnetization, among other quantities. However, they are simply not flexible enough to predict the phonon dispersion relation accurately, which is an incredibly high-dimensional quantity.

Modeling phonons’ momentum space with a fixed graph structure doesn’t do the job because phonons travel around atoms on different axes. This calls for flexibility, which the researchers introduced through virtual nodes.

While graph nodes are conventionally used to represent atoms, the team revisited that idea, reasoning that “graph nodes can be anything. And virtual nodes are a very generic approach you could use to predict a lot of high-dimensional quantities.”

By adding flexible virtual nodes to the fixed crystal structure, the team created a new framework called a virtual node graph neural network (VGNN). Because the VGNN’s output can vary in size, it is not restricted by the fixed crystal structure.

However, these virtual nodes are only capable of receiving messages from real nodes. So, while they get updated along with the real nodes during computation, the virtual nodes do not affect the model’s accuracy.

As co-lead author Abhijatmedhi Chotrattanapituk, who’s an electrical engineering and computer science graduate student, explained, the real nodes have no idea that virtual ones are there. He said:

“The way we do this is very efficient in coding. You just generate a few more nodes in your GNN.”

By having virtual nodes represent phonons, the VGNN model skips many of the complex calculations otherwise needed to predict the phonon dispersion relation, making it more efficient than a standard GNN.
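
This one-way message flow can be sketched in a few lines of NumPy. The snippet below is a simplified illustration of the general idea rather than the authors' VGNN: real-node (atom) features aggregate messages only from other real nodes, while virtual-node features listen to the real nodes without ever sending messages back, so adding them leaves the real-node computation untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

N_REAL, N_VIRTUAL, DIM = 4, 6, 8          # toy sizes: atoms, virtual nodes, feature width
real = rng.normal(size=(N_REAL, DIM))      # real-node (atom) features
virtual = np.zeros((N_VIRTUAL, DIM))       # virtual-node features start empty

adj = np.ones((N_REAL, N_REAL)) - np.eye(N_REAL)   # toy all-to-all atom connectivity
W_rr = rng.normal(scale=0.1, size=(DIM, DIM))      # real -> real weights (illustrative)
W_rv = rng.normal(scale=0.1, size=(DIM, DIM))      # real -> virtual weights (illustrative)

def message_passing_step(real, virtual):
    """One round of updates: virtual nodes listen to real nodes, never the reverse."""
    # Real nodes aggregate messages only from neighbouring real nodes.
    real_msgs = adj @ real @ W_rr
    new_real = np.tanh(real + real_msgs)
    # Virtual nodes aggregate messages from all real nodes but send none back,
    # so their presence cannot change what the real nodes compute.
    virtual_msgs = np.ones((N_VIRTUAL, N_REAL)) @ real @ W_rv
    new_virtual = np.tanh(virtual + virtual_msgs)
    return new_real, new_virtual

for _ in range(3):
    real, virtual = message_passing_step(real, virtual)

print(virtual.shape)   # (6, 8): per-virtual-node features that a readout could map to phonon values
```

In an actual VGNN, the number of virtual nodes would be tied to the size of the phonon output rather than chosen arbitrarily as it is in this toy setup.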

The new ML framework, created by researchers from MIT and elsewhere, has been found to predict phonon dispersion relations as much as 1 million times faster than traditional, non-AI-based approaches. Even compared with other AI-based techniques, the new framework is about 1,000x faster with comparable or even better accuracy.

When predicting a material’s heat capacity, the researchers found their model to be slightly more accurate, with prediction errors two orders of magnitude lower in some cases.

According to the researchers, the model can estimate phonon dispersion relations for a few thousand materials in just a few seconds using a personal computer. This allows for the exploration of more materials with specific thermal properties. It can even be used to calculate phonon dispersion relations in alloy systems, which is particularly challenging for traditional models.

In microelectronics, where managing heat is a major barrier to making devices faster, the new method can be extremely beneficial and help develop more efficient chips. It can also aid in the design of energy generation systems that produce more power, more efficiently.

The researchers propose three versions of the new model, each capable of estimating phonons directly from a material’s atomic coordinates, but with increasing complexity.

Here are two companies that can benefit from this AI-related development in predicting the thermal properties of materials:

#1. Intel

As a leading microprocessor manufacturer, managing heat is crucial for Intel. Improved AI models can help design faster, more efficient processors with better heat dissipation, boosting product performance and lifespan and making Intel more competitive.


Additionally, better thermal management can lead to energy savings and lower operational costs, benefiting both Intel and its customers. In 2023, Intel reported revenues of $54.2 billion and a net income of $1.7 billion, with a gross margin of 40%.

#2. NVIDIA

Efficient thermal management is essential for NVIDIA’s high-performance GPUs used in data centers, gaming, and AI applications. Enhanced AI models can lead to better cooling solutions, improving product performance and reliability. This paves the way for designing energy-efficient AI systems, strengthening NVIDIA’s market position.


Financially speaking, NVIDIA reported revenues of almost $27 billion and a net income of almost $4.4 billion in 2023, with a gross margin of 64.1%.

Ensuring Parity

A separate paper, meanwhile, improves fairness in the AI-driven allocation of scarce resources by introducing structured randomness. This allows ML-based predictions to account for their inherent uncertainty without compromising efficiency.

Over the past year, the popularity of generative AI tools like ChatGPT has made the technology an integral part of businesses. Organizations have been increasingly turning to ML models to allocate scarce resources such as welfare benefits. This usage ranges from screening resumes and selecting candidates for job interviews to healthcare providers ranking patients, based on predicted survival, for limited supplies of life-saving resources such as ventilators or donor organs.

When using AI models, the aim is to achieve fair predictions by reducing bias. This is often accomplished with techniques like calibrating the scores a model generates or adjusting the features it uses to make decisions.

While algorithms are often assumed to be fair by design, a new paper by researchers from Northeastern University and MIT argues that achieving fairness with ML often requires randomness. Their analysis found that randomization is particularly beneficial when a model’s decisions involve uncertainty, and that it is also needed when the same group consistently receives negative decisions.

The researchers presented a framework to introduce specific randomization into the decisions of the model. The method can be tailored to fit individual situations to improve fairness without damaging the accuracy or hurting the effectiveness of a model.

“Even if you could make fair predictions, should you be deciding these social allocations of scarce resources or opportunities strictly off scores or rankings? As things scale and we see more and more opportunities being decided by these algorithms, the inherent uncertainties in these scores can be amplified. We show that fairness may require some sort of randomization.”

– Lead author Shomik Jain, a graduate student in IDSS

This new research builds on a previous paper, which explored the harms of using deterministic systems at scale and found that using ML models to deterministically allocate resources can amplify existing inequalities and reinforce bias. According to senior author Ashia Wilson, a principal investigator in LIDS:

“Randomization is a very useful concept in statistics, and to our delight, satisfies the fairness demands coming from both a systemic and individual point of view.”

Exploring when randomization can improve fairness, the latest paper draws on philosopher John Broome’s work on the value of lotteries in achieving fairness, arguing that randomization is needed in scarce-resource settings to honor all individuals’ claims by giving each person a chance. Jain said:

“When you acknowledge that people have different claims to these scarce resources, fairness is going to require that we respect all claims of individuals. If we always give someone with a stronger claim the resource, is that fair?”

A deterministic allocation in which a stronger claim always gets the resource can cause systemic exclusion or lead to compounding injustice. Machine-learning models can also make mistakes, which are repeated when using a deterministic approach.

The paper noted that randomization can help overcome these problems. However, not all decisions should be equally randomized. A less certain decision should have more randomization.

For instance, kidney allocation involves projecting lifespan, which is highly uncertain; when two patients’ projected outcomes are only five years apart, the difference becomes even harder to measure. Wilson said:

“We want to leverage that level of uncertainty to tailor the randomization.”

To determine the extent of randomization required in different conditions, the researchers used statistical uncertainty quantification methods to show that calibrated randomization can produce fairer outcomes without significantly affecting the effectiveness or utility of the model.
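
A minimal sketch of the idea, using made-up numbers and a simplified weighting rule rather than the paper’s exact method: each candidate for a scarce resource gets a predicted score and an uncertainty estimate, and the final allocation is drawn from a weighted lottery whose randomness grows with that uncertainty.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical candidates for a single scarce resource: predicted benefit scores
# and uncertainty estimates for those predictions (all numbers are made up).
scores = np.array([0.82, 0.79, 0.45, 0.40])
uncertainty = np.array([0.15, 0.18, 0.05, 0.06])

def allocation_weights(scores, uncertainty, temperature=1.0):
    """Weighted-lottery probabilities: more uncertainty -> closer to a uniform draw.

    The temperature scales how much the average uncertainty softens the score-based
    ranking; this specific rule is an illustrative choice, not the paper's method.
    """
    softness = temperature * uncertainty.mean()
    logits = scores / max(softness, 1e-8)
    weights = np.exp(logits - logits.max())
    return weights / weights.sum()

probs = allocation_weights(scores, uncertainty)
winner = rng.choice(len(scores), p=probs)
print(np.round(probs, 3))                     # top-scored candidates are favoured but not guaranteed
print("allocate resource to candidate", winner)
```

With low uncertainty the probabilities concentrate on the top-scored candidate; with high uncertainty the lottery approaches a uniform draw, which is the calibration the paper argues for.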

“There is a balance to be had between overall utility and respecting the rights of the individuals who are receiving a scarce resource, but oftentimes the tradeoff is relatively small.”

– Wilson

While randomization can be beneficial for improving fairness in areas like college admissions, the researchers also noted situations, such as criminal justice, where randomizing decisions could actually harm individuals instead of improving fairness.

In the future, the researchers plan to study other use cases and investigate how randomization affects other factors, such as prices and competition, and how it can be used to improve the robustness of ML models. Now, we will look at two companies that can benefit significantly from this development:

#1. UnitedHealth Group 

UnitedHealth Group Inc. can enhance fairness in patient care management and resource distribution by incorporating structured randomness in AI models. This approach reduces biases and ensures equitable access to treatments, aligning with UnitedHealth’s commitment to providing high-quality, affordable care.


It reported second-quarter 2024 revenues of $98.9 billion, reflecting a $6 billion increase year-over-year.

#2. Pfizer 

Pfizer Inc. can use structured randomness in AI to ensure fair patient selection in clinical trials and equitable allocation of experimental treatments. This approach will support Pfizer’s mission to advance health equity and benefit a broader population.


Revenue-wise, Pfizer reported annual revenues of $58.5 billion in 2023.

Personalized Language Learning Systems

Another interesting application of AI is the generation of personalized storybooks to help children with language learning. By using generative AI and home IoT technology, the latest study aims to offer an effective, customized way to help children improve their speech processing and communication.

Language development in children is vitally important, given its impact on their cognitive and academic growth. Because of the role it plays in children’s overall social development, language progress must be evaluated regularly so that timely interventions can be provided.

Traditionally, a one-size-fits-all approach is used, relying on standardized vocabulary lists and pre-made materials for language skill assessments and interventions. Yet children learn language by interacting with their environments, and because they grow up in diverse surroundings, their vocabulary exposure varies widely.

To overcome the shortcomings of this conventional approach, a team of researchers developed an innovative educational system that is tailored to each child’s unique environment. 

This personalized language learning system is called “Open Sesame? Open Salami! (OSOS).” It combines speech pathology theory with practical expertise and accommodates variations in the language development of children through an individualized weighting of factors and flexible vocabulary selection criteria. 

Powered by generative AI and pervasive sensing, OSOS profiles a child’s language environment, extracts personally tailored priority words, and creates custom storybooks that naturally include those words. It comprises three major modules:

  1. Personalized Language Profiler
  2. Target Vocabulary Extractor
  3. Personalized Intervention Aid Generator

The Profiler is to be deployed at home and embedded in home appliances or smart speakers to collect speech samples. Parents will control when to start and stop recording.

For this purpose, home IoT devices were utilized to capture and monitor the daily environment and language exposure of children. The children’s vocabulary was then examined using speaker separation, which identifies and isolates different speakers, and morphological analysis techniques to assess the smallest semantic units of language.

The Extractor analyzes the recorded utterances and produces a prioritized, selectable list of words recommended for the child. Each word is scored based on crucial factors related to speech pathology.

The Generator, meanwhile, provides intervention in the form of storybooks, a common clinical practice and part of most children’s natural routines. To create personalized materials, the team used advanced generative AI technologies, including GPT-4 and Stable Diffusion. These solutions allowed them to produce bespoke books that seamlessly integrate each child’s target vocabulary. 
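
To illustrate how the Extractor and Generator could fit together, here is a simplified Python sketch. The scoring factors, their weights, and the prompt wording are assumptions made for illustration, not the OSOS implementation, and the call to a text-generation model is left as a comment.

```python
# Illustrative sketch of an OSOS-style pipeline (not the authors' code):
# score candidate words heard at home, pick priority targets, and build a
# storybook prompt for a generative model. Factors and weights are assumed.

FACTOR_WEIGHTS = {"frequency_at_home": 0.4,    # how often the word occurs around the child
                  "age_appropriateness": 0.4,  # hypothetical clinician-style rating in [0, 1]
                  "not_yet_produced": 0.2}     # 1.0 if the child has not said the word yet

candidate_words = {
    "spoon":   {"frequency_at_home": 0.9, "age_appropriateness": 0.8, "not_yet_produced": 1.0},
    "blanket": {"frequency_at_home": 0.6, "age_appropriateness": 0.9, "not_yet_produced": 1.0},
    "invoice": {"frequency_at_home": 0.7, "age_appropriateness": 0.1, "not_yet_produced": 1.0},
}

def priority_score(factors):
    """Weighted sum of the assumed speech-pathology-style factors."""
    return sum(FACTOR_WEIGHTS[name] * value for name, value in factors.items())

ranked = sorted(candidate_words, key=lambda w: priority_score(candidate_words[w]), reverse=True)
targets = ranked[:2]

prompt = (
    "Write a short, friendly storybook for a 4-year-old that naturally uses "
    f"these words at least twice each: {', '.join(targets)}."
)
print(targets)   # e.g. ['spoon', 'blanket'] outrank the age-inappropriate word
print(prompt)    # this prompt would then be sent to a text-generation model such as GPT-4
```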

The team tested the personalized language learning system with nine families over a four-week period. The results demonstrated the system’s applicability in everyday settings and showed that children effectively learned the target vocabulary.

“Our goal is to leverage AI to create customized guides tailored to different individuals’ levels and needs.”

– Lead author Jungeun Lee from POSTECH

The two companies below can benefit from AI-powered personalized language learning systems:

#1. Amazon

Amazon, with its extensive AI and IoT capabilities, can integrate personalized language learning systems into its smart home devices like Alexa. This would allow parents to use Alexa to capture and analyze their children’s language development in real-time, offering tailored learning experiences.


In 2023, Amazon’s total revenue grew by 12% to $575 billion, with North America, International, and AWS segments contributing significantly. 

North American revenue rose 12% to $353 billion, International revenue increased 11% to $131 billion, and AWS revenue climbed 13% to $91 billion. Its operating income surged from $12.2 billion in 2022 to $36.9 billion in 2023, while free cash flow turned from negative $11.6 billion in 2022 to positive $36.8 billion.

#2. Alphabet Inc. (Google)

Google can deploy personalized language learning systems using its Google Home and Nest devices. Utilizing Google’s AI expertise, these devices can provide customized learning content and track language development, aiding in more effective language interventions for children.


In 2023, Alphabet Inc.’s total revenue grew to $307.4 billion, up from $282.8 billion in 2022. Google Services, including Google Search and YouTube ads, generated $272.5 billion, while Google Cloud earned $33.1 billion. Operating income rose to $84.3 billion, with Google Cloud turning a $1.9 billion loss in 2022 into a $1.7 billion profit. Total assets reached $402.4 billion, including $110.9 billion in cash.

Conclusion

As we have seen with these recent achievements, AI has diverse use cases ranging from helping children with language development to designing more efficient energy-conversion systems and high-performance microelectronic devices. This goes to show just how powerful AI’s versatility is. 

AI’s ability to handle large volumes of data, perform repetitive tasks efficiently, learn from data, and improve over time makes it a truly disruptive force across a wide range of industries, helping them innovate, improve efficiency and productivity, reduce costs, and enhance decision-making. Against this backdrop, AI exhibits vast potential, which may well extend beyond our current estimates, to transform industries and, in turn, our lives.

Click here to learn all about investing in artificial intelligence.


