APL Machine Learning Applications: Advancements and Innovations
Introduction
Artificial intelligence (AI) and machine learning (ML) are rapidly transforming many fields, and the Johns Hopkins Applied Physics Laboratory (APL) is at the forefront of developing and applying these technologies to address critical national challenges. This article surveys recent advancements and applications of machine learning, highlighting innovative approaches and their potential impact across diverse domains.
Brain-Inspired Learning in Artificial Neural Networks
Artificial neural networks (ANNs) have become indispensable tools in machine learning, demonstrating remarkable success in areas like image and speech generation, game playing, and robotics. However, significant differences exist between ANNs and the biological brain, particularly in how they learn. Current research focuses on integrating biologically plausible mechanisms, such as synaptic plasticity, into ANNs to enhance their capabilities. Beyond improving performance, this line of work may bring us closer to understanding how learning gives rise to intelligence.
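One classic biologically plausible mechanism is Hebbian synaptic plasticity. The sketch below, a toy illustration and not any specific APL method, uses Oja's rule (a Hebbian update with a decay term) so a single linear neuron's weights self-organize toward the dominant direction of its correlated inputs:

```python
import random

# Minimal sketch of Hebbian-style synaptic plasticity (Oja's rule) on a
# single linear neuron. Illustrative toy only; the learning rate, input
# statistics, and network size are arbitrary assumptions.
def oja_update(w, x, lr=0.05):
    y = sum(wi * xi for wi, xi in zip(w, x))          # neuron output
    # Hebbian term y*x, plus a decay -y^2*w that keeps |w| bounded
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

random.seed(0)
w = [random.uniform(-0.1, 0.1) for _ in range(2)]
for _ in range(2000):
    s = random.gauss(0, 1)
    x = [s, 0.9 * s]          # two correlated input channels
    w = oja_update(w, x)

norm = sum(wi * wi for wi in w) ** 0.5
print(round(norm, 2))  # weights converge to roughly unit length
```

Unlike backpropagation, the update uses only locally available quantities (the pre- and postsynaptic activity), which is what makes rules like this biologically plausible.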
Complete Representation of Atomic Structures
In physics and theoretical chemistry, obtaining a comprehensive and symmetric representation of point particle groups, such as atoms in a molecule, is crucial. This is particularly important with the widespread adoption of machine-learning techniques in science. Some commonly used descriptors for representing point clouds struggle to distinguish between special arrangements of particles in three dimensions, limiting the ability to accurately learn physical relationships. A novel approach involves constructing descriptors based on the relative arrangement of particle triplets, enabling the creation of symmetry-adapted models with universal approximation capabilities.
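The idea behind triplet-based descriptors can be sketched in a few lines: for every triplet of particles, record the sorted side lengths of the triangle they form, then sort the resulting feature list. This toy version (real symmetry-adapted descriptors are considerably more elaborate) is invariant under rotations and particle relabeling by construction:

```python
import itertools
import math

# Toy rotation- and permutation-invariant descriptor built from particle
# triplets: each triplet contributes its sorted triangle side lengths.
def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def triplet_descriptor(points):
    feats = []
    for i, j, k in itertools.combinations(range(len(points)), 3):
        sides = sorted([dist(points[i], points[j]),
                        dist(points[j], points[k]),
                        dist(points[i], points[k])])
        feats.append(tuple(round(s, 6) for s in sides))
    return sorted(feats)   # sorting makes it permutation invariant

square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
rotated = [(y, -x, z) for x, y, z in square]   # 90-degree rotation
assert triplet_descriptor(square) == triplet_descriptor(rotated)
print(triplet_descriptor(square))
```

Because each feature depends on three particles at once, the descriptor captures angular information that pure pair-distance histograms miss, which is the motivation for triplet constructions.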
Accelerating Defect Predictions in Semiconductors
First-principles computations are reliable for predicting the energetics of point defects in semiconductors, but their use is limited by computational expense. Machine learning models, especially graph neural networks (GNNs), can accelerate defect predictions by encoding defect coordination environments. A framework has been developed for predicting and screening native defects and functional impurities in semiconductors using crystal GNNs trained on high-throughput density functional theory (DFT) data. This framework utilizes a large computational defect dataset and achieves high accuracy in predicting defect formation energy and screening defect candidates.
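At the heart of a crystal GNN is message passing: each atom (node) repeatedly aggregates features from its bonded neighbors, so that after a few steps its feature encodes the local coordination environment. The following is a deliberately tiny sketch with scalar features and mean aggregation, not the framework described above:

```python
# Minimal sketch of one message-passing step, the core operation of
# graph neural networks for crystals. Toy graph and update rule are
# hypothetical; real models use learned weights and vector features.
def message_pass(features, neighbors):
    out = {}
    for node, feat in features.items():
        msgs = [features[n] for n in neighbors[node]]
        agg = sum(msgs) / len(msgs)          # mean over neighbors
        out[node] = 0.5 * feat + 0.5 * agg   # blend self and neighborhood
    return out

# Toy 3-atom "crystal": a triangle graph with scalar node features
feats = {0: 1.0, 1: 2.0, 2: 3.0}
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
feats = message_pass(feats, nbrs)
print(feats)  # each feature moves toward its neighborhood mean
```

In a defect-prediction setting, a readout over these node features (e.g., at the defect site) would feed a regressor for the formation energy.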
Reinforcement Learning for Discovering Metastable Boron Phases
Boron, an element with complex chemistry, has been the subject of much research. While computational studies have illuminated some stable boron allotropes, the demand for multifunctionality necessitates exploring metastable phases. A workflow using reinforcement learning coupled with decision trees, such as Monte Carlo tree search, has been introduced to search for stable and metastable boron phases, with enthalpy as the objective. This approach has led to the discovery of new boron metastable phases and the construction of a phase diagram that locates their phase space at different levels of metastability.
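The search idea can be illustrated with a highly simplified Monte Carlo tree search that minimizes a toy "enthalpy" over 4-bit configurations. The objective and setup here are hypothetical stand-ins; the published workflow couples reinforcement learning with tree search over actual crystal phases:

```python
import math

# Toy MCTS: find the 4-bit configuration minimizing a stand-in enthalpy.
TARGET = (1, 0, 1, 1)

def enthalpy(bits):                # toy objective: distance from TARGET
    return sum(b != t for b, t in zip(bits, TARGET))

def mcts(iters=2000, c=1.4):
    stats = {(): [0, 0.0]}         # node prefix -> [visits, total reward]
    for _ in range(iters):
        path, prefix = [()], ()
        while len(prefix) < 4:     # descend, choosing children by UCB1
            children = [prefix + (b,) for b in (0, 1)]
            for ch in children:
                stats.setdefault(ch, [0, 0.0])
            n_parent = stats[prefix][0] + 1
            prefix = max(children, key=lambda ch: float("inf")
                         if stats[ch][0] == 0 else
                         stats[ch][1] / stats[ch][0]
                         + c * math.sqrt(math.log(n_parent) / stats[ch][0]))
            path.append(prefix)
        reward = -enthalpy(prefix)  # lower enthalpy => higher reward
        for node in path:           # backpropagate along the path
            stats[node][0] += 1
            stats[node][1] += reward
    best = ()                       # read out the most-visited branch
    while len(best) < 4:
        best = max((best + (b,) for b in (0, 1)),
                   key=lambda ch: stats[ch][0])
    return best

print(mcts())
```

The UCB1 term balances exploiting branches with good average reward against exploring rarely visited ones, which is what lets tree search find low-enthalpy (and, with a modified objective, metastable) candidates without exhaustive enumeration.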
In-Memory/In-Sensor Reservoir Computing
To overcome the energy consumption and computational speed limitations of deep learning on digital computers, reservoir computing (RC) has gained attention. In-memory or in-sensor computers leverage emerging electronic and optoelectronic devices for data processing at the point of data storage or sensing. This technology significantly reduces energy consumption by minimizing data transfers between sensing, storage, and computational units. Recent advancements in in-memory/in-sensor RC include algorithm designs, material and device development, and downstream applications in classification and regression problems.
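The defining feature of reservoir computing is that only a linear readout is trained, while the recurrent "reservoir" stays fixed, which is what makes it attractive for in-memory and in-sensor hardware. A minimal echo state network sketch (toy task and arbitrary hyperparameters, not a hardware model):

```python
import math
import random

# Minimal echo state network: fixed random recurrent reservoir plus a
# trained linear readout. Toy task: predict the next sample of a sine.
random.seed(0)
N = 20                                   # reservoir size (arbitrary)
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W = [[random.uniform(-0.3, 0.3) * (random.random() < 0.2)
      for _ in range(N)] for _ in range(N)]   # sparse, small weights

def step(state, u):
    return [math.tanh(W_in[i] * u + sum(W[i][j] * state[j] for j in range(N)))
            for i in range(N)]

us = [math.sin(0.3 * t) for t in range(300)]
state, states = [0.0] * N, []
for u in us:                             # drive the reservoir with input
    state = step(state, u)
    states.append(state)

# Train ONLY the readout, by stochastic gradient descent; the reservoir
# weights above are never updated.
w_out = [0.0] * N
for epoch in range(200):
    for s, target in zip(states[50:-1], us[51:]):
        y = sum(wi * si for wi, si in zip(w_out, s))
        err = y - target
        w_out = [wi - 0.01 * err * si for wi, si in zip(w_out, s)]

pred = sum(wi * si for wi, si in zip(w_out, states[-2]))
print(round(pred, 2), round(us[-1], 2))  # prediction vs. true next value
```

In an in-memory or in-sensor implementation, the fixed reservoir dynamics would be provided by device physics rather than simulated arithmetic, and only the lightweight readout would need digital training.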
Deep Learning for Wind Farm Layout Optimization
Wind turbine wakes significantly impact wind farm performance, reducing energy production and increasing fatigue loads on downstream turbines. Wind farm turbine layouts are designed to minimize wake interactions using predictive models, including analytical wake models and computational fluid dynamics (CFD) simulations. To address the time-consuming and computationally expensive nature of CFD simulations, a deep convolutional hierarchical encoder-decoder neural network architecture, DeepWFLO, has been proposed as an image-to-image surrogate model for predicting the wind velocity field for Wind Farm Layout Optimization (WFLO). A model-fusion strategy uses analytical wake models to generate an additional input channel for the network, further improving prediction accuracy.
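For context on the analytical wake models mentioned above, the classical Jensen (top-hat) model gives the on-axis velocity deficit behind a turbine in closed form; the thrust coefficient and wake decay constant below are typical illustrative values, not site-specific data:

```python
import math

# Jensen analytical wake model: the wake expands linearly downstream,
# and the on-axis fractional velocity deficit decays accordingly.
def jensen_deficit(x, D=100.0, Ct=0.8, k=0.05):
    """Fractional velocity deficit at distance x (m) downstream.

    D: rotor diameter (m), Ct: thrust coefficient, k: wake decay constant.
    """
    if x <= 0:
        return 0.0
    return (1.0 - math.sqrt(1.0 - Ct)) / (1.0 + 2.0 * k * x / D) ** 2

U = 8.0                                   # free-stream wind speed, m/s
for x in (200.0, 500.0, 1000.0):
    u = U * (1.0 - jensen_deficit(x))
    print(f"x = {x:6.0f} m: wind speed {u:.2f} m/s")
```

Models like this are cheap enough to evaluate inside a layout optimizer, which is exactly the gap a CFD surrogate such as DeepWFLO aims to close at higher fidelity.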
Contextual Beamforming
Contextual beamforming leverages location information and AI to improve beamforming in wireless telecommunication, offering significant gains in signal-to-interference-plus-noise ratio (SINR). By capitalizing on location awareness and advanced AI techniques, the field can deliver more reliable and efficient mobile networks. The combination of maximum ratio transmission (MRT) and zero-forcing techniques, alongside deep neural networks tuned with Bayesian optimization, represents a promising approach to contextual beamforming.
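The two classical beamformers named above are easy to sketch for a toy 2-antenna, 2-user downlink with hypothetical complex channel gains: MRT points the beam at the desired user's channel, while zero-forcing inverts the channel matrix so each beam nulls the other user's interference.

```python
# Toy MRT and zero-forcing beamformers; channel values are made up.
h1 = [1 + 1j, 0.5 - 0.2j]     # channel from the 2 antennas to user 1
h2 = [0.3 + 0.1j, 1 - 0.5j]   # channel to user 2

def dot(h, w):
    return sum(hi * wi for hi, wi in zip(h, w))

# --- MRT for user 1: conjugate the channel and normalize ---
norm = sum(abs(x) ** 2 for x in h1) ** 0.5
w_mrt = [x.conjugate() / norm for x in h1]

# --- Zero-forcing: invert H = [h1; h2]; each column of H^-1 serves
# --- one user while nulling the other
a, b, c, d = h1[0], h1[1], h2[0], h2[1]
det = a * d - b * c
w_zf1 = [d / det, -c / det]            # beam for user 1
assert abs(dot(h2, w_zf1)) < 1e-9      # user 2 sees (almost) no signal
print(abs(dot(h1, w_mrt)), abs(dot(h1, w_zf1)))
```

MRT maximizes the desired user's received power but ignores interference; zero-forcing removes interference at some cost in signal power, which is the trade-off AI-driven contextual schemes try to navigate using location information.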
Self-Learning Electronic Circuits
Self-learning electronic circuits, or "physical learning machines," offer a path to analog hardware that directly employs physics to learn desired functions from examples at low energy cost. By using good initial conditions and a new learning algorithm, energy consumption can be reduced further. However, a trade-off emerges between power efficiency and solution accuracy: greater power reductions come at the cost of reduced accuracy.
AI Applications at Johns Hopkins APL
Johns Hopkins APL is making significant advances in AI across many domains, harnessing the technology's capabilities while addressing its weaknesses. A collaborative community of AI researchers and applied scientists works in areas ranging from the undersea domain to outer space, incorporating autonomy, computer vision, machine learning, and agentic AI. The Intelligent Systems Center (ISC) leverages APL's broad expertise to advance the use of intelligent systems for critical national challenges.
Collaborative Robotics with AI Agents
APL and Microsoft have demonstrated an AI agent that can coordinate heterogeneous robot teams using large language models. This highlights how APL’s expertise in autonomy software and Microsoft’s scalable cloud tools are advancing the future of collaborative robotics.
Robotic Arm for Maritime Repair and Manufacturing
APL has installed a state-of-the-art robotic arm to advance repair and manufacturing for the maritime industrial base. Using wire arc additive manufacturing and advanced sensing technologies, the system enables faster, more precise repairs of oversized components, strengthening fleet readiness and reducing costs.
Machine Learning for Additive Manufacturing Microstructure Prediction
APL’s machine learning method successfully predicted the impact of cooling rate and temperature gradient on grain orientation and size during laser powder bed fusion (LPBF). This approach promises to dramatically reduce the time and cost of developing materials with tailored physical properties and will soon be applied in a NASA-funded effort focused on the creation of a digital twin.
Physics-Based Computational Modeling and Simulation
Researchers developed a technique to predict the microstructure formed during a single laser pass over a certain volume of powder via physics-based computational modeling and simulation. A computational fluid dynamics (CFD) model quantifies changes in temperature and cooling rates during the printing process, which are then used as input into another model that predicts microstructural formation.
Diffusion Probabilistic Field Model
To reduce the need for costly simulations, a machine learning model, a diffusion probabilistic field model, was developed. This model generates images based on cooling rate and temperature gradient, approximating simulations with enough accuracy to accelerate the process of developing additively manufactured materials using LPBF.
Lifelong Learning Agents
Engineers and scientists in APL’s Intelligent Systems Center are developing systems that continuously improve in the field while applying previously acquired proficiencies to perceive, decide, act, and team in new situations.
MetaArcade and L2Explorer
MetaArcade is a highly parameterized environment for constructing custom 2D arcade games with controlled variation of perceptual features, actions, and rewards. L2Explorer is a Unity-based first-person-view 3D exploration environment that can be configured on the fly to generate a range of tasks and task variants that can be structured into complex and evolving evaluation curricula.
RLBlocks, TELLA, CLAMP, and L2Metrics
RLBlocks is a framework of building blocks for lifelong learning agents. TELLA is a framework for Training and Evaluating Lifelong Learning Agents. CLAMP is an algorithm to identify the capabilities of a learning agent using its performance data. L2Metrics is a Python library for calculating lifelong learning metrics from performance data.
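To give a flavor of what a lifelong-learning metrics library computes, the toy function below measures "performance maintenance": how well an agent retains an earlier task after training on a new one. The numbers are hypothetical, and this is not the actual L2Metrics API:

```python
# Toy lifelong-learning metric: relative change in task-A performance
# after the agent trains on task B. Hypothetical scores; not L2Metrics.
def performance_maintenance(before, after):
    """Relative change in task-A performance after task-B training."""
    return (after - before) / before

# Agent's task-A score measured before and after task-B training
pm = performance_maintenance(before=0.80, after=0.72)
print(f"maintenance: {pm:+.1%}")   # negative => catastrophic forgetting
```

Libraries like L2Metrics compute families of such quantities (maintenance, forward and backward transfer, relative performance) from logged performance curves rather than from two scalar scores.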
Robustness to Visual and Task Variations
Artificial agents often struggle when asked to identify objects under different visual conditions, act upon input observations that are superficially different, or adapt to significant changes to their given task. Research is ongoing to address these challenges and improve the robustness of AI systems.
tags: #APL #machine-learning #applications

