Auditing AI

Where to Start and What You Should Know

What We're All About

Cyber Insecurity was designed to be the central home for the internet's highest-quality, most authentic cybersecurity content. We want to break down the mysteries of our industry. We're not here to be the only content creators, but we aim to be the most experienced, offering the most effective information in the field.

The Cyber Insecurity mission is to ensure that accurate, actionable, and valuable information about all things cybersecurity—news, careers, and strategies—is available to anyone, regardless of discipline, years in the industry, or knowledge level.

Check Out the Course

Experience the internet's most credible AI auditing and assessment course material, built on deep research. This course, led by Jax Scott, is meticulously designed to equip you with the skills needed to audit AI systems effectively.

Check it Out

We believe in the power of knowledge sharing. That's why we're offering free access to the resources used in researching and creating our AI Auditing Course. Dive into them to expand your knowledge and stay ahead in the field of AI auditing.

References

General References

ABOUT ML Reference Document

AI Assurance Audit of FakeFinder, an Open-Source Deepfake Detection Tool

The Algorithm Audit: Scoring the Algorithms That Score Us

Algorithmic Auditing and Social Justice: Lessons from the History of Audit Studies

Algorithmic Fairness: Choices, Assumptions, and Definitions

Algorithms for Decision Making

Attention and Augmented Recurrent Neural Networks

Bias Testing for Generalized Machine Learning Application

Biases in Large Language Models: Origins, Inventory, and Discussions

Black Loans Matter: Fighting Bias for AI Fairness in Lending

Catalog of Bias

The Conversation - Biased Algorithms: Here's a More Radical Approach to Creating Fairness

Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence

Deep Learning Ideas That Have Stood the Test of Time

Dive into Deep Learning

Diversity in Sociotechnical Machine Learning Systems

The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models

Examining the Black Box: Tools for Assessing Algorithmic Systems

Extracting Private Data from a Neural Network

Fairness and Machine Learning: Limitations and Opportunities (Book)

Fairness and Machine Learning: Limitations and Opportunities (PDF)

GAO - Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities

Gender Shades

Google DeepMind Reinforcement Learning Series 2021

Google Machine Learning Crash Course with TensorFlow APIs

Google People and AI Research

HELM: A Holistic Framework for Evaluating Foundation Models

H.R.2231 - Algorithmic Accountability Act of 2019

How to Build Accountability into Your AI

ICO - Explaining Decisions Made with AI

ICO - Guidance on the AI Auditing Framework: Draft Guidance for Consultation

The Illustrated Transformer

Machine Learning for High-Risk Applications: Approaches to Responsible AI

NIST Special Publication 1270: Towards a Standard for Identifying and Managing Bias in Artificial Intelligence

NIST Special Publication 1270 (DRAFT): A Proposal for Identifying and Managing Bias in Artificial Intelligence

OECD: Recommendation of the Council on Artificial Intelligence

Prompt Engineering Guide

ProPublica: How We Analyzed the COMPAS Recidivism Algorithm

ProPublica: Machine Bias

Qiskit - Machine Learning Course

Real-World Strategies for Model Debugging

The Seductive Diversion of 'Solving' Bias in Artificial Intelligence

Socially Responsible AI Algorithms: Issues, Purposes, and Challenges

Stanford CS221: Artificial Intelligence: Principles and Techniques

StereoSet: Measuring Stereotypical Bias in Pretrained Language Models

A Survey on Bias and Fairness in Machine Learning

TL;DS - 21 Fairness Definitions and Their Politics

Training Language Models to Follow Instructions with Human Feedback

Tutorial #1: Bias and Fairness in AI

UC Berkeley CS 294: Fairness in Machine Learning

UCLA NLP Tutorial: Bias and Fairness in Natural Language Processing

Upturn - Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias

What Does "Fairness" Mean for Machine Learning Systems?

What Is AI Bias Mitigation, and How Can It Improve AI Fairness?

The Man Behind It All

Neal Bridges

Neal Bridges, a veteran cybersecurity expert and influencer, has an impressive track record spanning the United States Air Force, NSA, FBI, and Department of Defense. His expertise in building state-sponsored hacker training units in the Air Force and leading red and purple teams for Fortune 100 companies has made him a formidable figure in the field. Neal has always wanted to give back to the cybersecurity community and provide a one-stop shop for the most important resources out in the wild. That's why he created Cyber Insecurity.

As a CISO and content creator, he reaches thousands of professionals through the Cyber Insecurity podcast and live stream. He is a respected voice in media on cybersecurity matters, with appearances on Bloomberg, CBS, WRAL News, and KVEO-TV.

Meet Jax

Jax Scott

Jaclyn "Jax" Scott combines more than 18 years of military service with deep expertise in cybersecurity. A seasoned Special Operations Warrant Officer, she is also a cybersecurity leader with experience in electronic warfare and global cyber operations. As the founder of Outpost Gray and co-host of 2CyberChicks, Jax shares her knowledge widely; her advocacy includes the "Jax Act" for female veterans. With a Master's in Cybersecurity from Georgetown, her work spans NATO operations to cybersecurity education, emphasizing risk management, AI security, and penetration testing. Jax's contributions to cybersecurity and national security are as profound as they are educational.

© XXXX CYBER INSECURITY, INC. ALL RIGHTS RESERVED 

PRIVACY POLICY | COOKIE POLICY