60NANB24D231
Project Grant
Overview
Grant Description
Title: CMU/NIST AI Measurement Science & Engineering Cooperative Research Center
Purpose: Establish the CMU/NIST AI Measurement Science & Engineering Cooperative Research Center (AIMSEC) to facilitate collaborative research and experimentation focused on advancing our national capability for test and evaluation of modern AI capabilities and tools.
There are three principal goals:
1. Advance modern AI risk management.
2. Validate evaluation approaches through stakeholder partnerships.
3. Translate assessment capabilities and methodologies to practice.
Research will focus on advancing measurement science for modern AI systems, including machine learning (ML) and generative AI (GenAI) systems, such as large language models (LLMs).
Activities to be performed:
- Create a practical taxonomy of threats against large language models (LLMs).
- Develop techniques that provably reduce memorization and protect confidential information, even at the cost of less exact outputs.
- Develop new measures of robustness for ML models that better reflect a model’s ability to withstand attacks in practical scenarios (an illustrative sketch follows this list).
- Develop richer definitions of alignment, correctness, and intent to determine the suitability of generative AI systems for their intended uses.
- Create a state-of-the-art suite of attacks against LLMs for testing new models.
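As a purely illustrative sketch (not part of the award description), one way a practical robustness measure of the kind named above could be operationalized is accuracy under a bounded single-step adversarial perturbation (FGSM). The model, data loader, and perturbation budget epsilon below are assumed for the example.

import torch
import torch.nn.functional as F

def robust_accuracy(model, loader, epsilon=0.03, device="cpu"):
    # Fraction of inputs still classified correctly after a single-step
    # FGSM perturbation of L-infinity size epsilon (inputs assumed in [0, 1]).
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(loss, x)
        # Move each input in the direction that increases the loss, then clamp.
        x_adv = (x + epsilon * grad.sign()).clamp(0.0, 1.0).detach()
        with torch.no_grad():
            preds = model(x_adv).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.numel()
    return correct / total

A full evaluation suite would combine many attack types and threat models; this only shows the general shape such a metric might take.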
These activities will be organized in three streams of effort aligned with NIST AI priorities:
- Validity, reliability, safety, privacy, and security
- Accountability, transparency, fairness, and explainability
- Generative AI evaluation
Expected outcomes: Development of AI system-level tooling, metrics, evaluation procedures, development processes, and best practices to ensure AI system builders consistently engineer safe AI systems.
These efforts are expected to yield additional approaches to mitigation of weaknesses and vulnerabilities, privacy protection, a practical taxonomy of threats against LLMs, approaches to meet AI governance challenges, and new socio-technical definitions and metrics.
Intended beneficiaries: AI stakeholders, including government, industry, academia, and the general public, will benefit from this work, which advances the national capability to test and evaluate modern AI capabilities and tools.
Subrecipient activities: There are no planned subawards.
Awardee
Carnegie Mellon University
Funding Goals
To support activities that develop, expand, strengthen, or sustain NIST partnership programs and/or support the conduct of research or a recipient's portion of collaborative research in a variety of areas including, but not limited to: metrology; standards; nanotechnology; artificial intelligence; advanced communications; advanced manufacturing; promotion of U.S. innovation and industrial competitiveness; measurements in sciences; neutron research; greenhouse gas measurements; and enhancing coordination of the U.S. standards system with government and private sector organizations.
Grant Program (CFDA)
11.609 Measurement and Engineering Research and Standards
Awarding / Funding Agency
National Institute of Standards and Technology (NIST), U.S. Department of Commerce
Place of Performance
Pittsburgh, Pennsylvania 15213-3815, United States
Geographic Scope
Single Zip Code
Carnegie Mellon University was awarded the Advanced AI Measurement Science Center for Modern AI Evaluation Project Grant (60NANB24D231), worth $6,000,000, from the National Institute of Standards and Technology in October 2024, with work to be completed primarily in Pittsburgh, Pennsylvania, United States. The grant has a duration of 2 years and was awarded through assistance program 11.609, Measurement and Engineering Research and Standards. The Project Grant was awarded through grant opportunity Measurement Science and Engineering (MSE) Research Grant Programs.
Status
Ongoing
Last Modified 10/4/24
Period of Performance
Start Date: 10/1/24
End Date: 10/1/26
Funding Split
Federal Obligation: $6.0M
Non-Federal Obligation: $0.0
Total Obligated: $6.0M
Additional Detail
Award ID FAIN
60NANB24D231
SAI Number
60NANB24D231_0
Award ID URI
EXE
Awardee Classifications
Private Institution Of Higher Education
Awarding Office
1333ND DEPT OF COMMERCE NIST
Funding Office
1333ND DEPT OF COMMERCE NIST
Awardee UEI
U3NKNFLNQ613
Awardee CAGE
97668
Performance District
PA-12
Senators
Robert Casey
John Fetterman