Stephanie Kelley is an Assistant Professor whose research focuses on the ethics of analytics and artificial intelligence (AI) in operations, particularly within the financial services industry. Her work aims to understand the causes of AI ethics challenges and methods for preventing them, investigating both algorithmic initiatives and governance solutions. These challenges include bias and discrimination, explainability, privacy, and human autonomy. Her past work has been featured in Manufacturing & Service Operations Management and the Journal of Business Ethics. In addition, her research has informed several AI ethics policies, including those of the Monetary Authority of Singapore, Lululemon, and the Office of the Superintendent of Financial Institutions.
Currently, Stephanie is studying when, how, and why algorithmic ethics explanations matter in artificial intelligence, and how to design effective AI ethics audits. She is also interested in the use of analytics and AI to promote equity, diversity, inclusion, and Indigenization, with a particular focus on improving gender equality in the field of artificial intelligence.
Beyond working with organizations directly, she also works with several regulators to inform and develop AI ethics policies that balance innovation and the prevention of ethical harms.
-
Kelley, S.; Ovchinnikov, A.; Hardoon, D. R.; Heinrich, A., 2022, "Antidiscrimination Laws, Artificial Intelligence, and Gender Bias: A Case Study in Nonmortgage Fintech Lending", Manufacturing & Service Operations Management
Abstract: Problem definition: We use a realistically large, publicly available data set from a global fintech lender to simulate the impact of different antidiscrimination laws and their corresponding data management and model-building regimes on gender-based discrimination in the nonmortgage fintech lending setting. Academic/practical relevance: Our paper extends the conceptual understanding of model-based discrimination from computer science to a realistic context that simulates the situations faced by fintech lenders in practice, where advanced machine learning (ML) techniques are used with high-dimensional, feature-rich, highly multicollinear data. We provide technically and legally permissible approaches for firms to reduce discrimination across different antidiscrimination regimes whilst managing profitability. Methodology: We train statistical and ML models on a large and realistically rich publicly available data set to simulate different antidiscrimination regimes and measure their impact on model quality and firm profitability. We use ML explainability techniques to understand the drivers of ML discrimination. Results: We find that regimes that prohibit the use of gender (like those in the United States) substantially increase discrimination and slightly decrease firm profitability. We observe that ML models are less discriminatory, of better predictive quality, and more profitable compared with traditional statistical models like logistic regression. Unlike omitted variable bias—which drives discrimination in statistical models—ML discrimination is driven by changes in the model training procedure, including feature engineering and feature selection, when gender is excluded. We observe that down sampling the training data to rebalance gender, gender-aware hyperparameter selection, and up sampling the training data to rebalance gender all reduce discrimination, with varying trade-offs in predictive quality and firm profitability. 
Probabilistic gender proxy modeling (imputing applicant gender) further reduces discrimination with negligible impact on predictive quality and a slight increase in firm profitability. Managerial implications: A rethink is required of the antidiscrimination laws, specifically with respect to the collection and use of protected attributes for ML models. Firms should be able to collect protected attributes to, at minimum, measure discrimination and ideally, take steps to reduce it. Increased data access should come with greater accountability for firms.
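The rebalancing idea described in the abstract — down-sampling the training data so gender groups are equally represented, then measuring group-level discrimination — can be sketched as follows. This is a minimal illustrative sketch on synthetic data, not the paper's dataset or code; the function names, the income threshold "model", and the demographic-parity metric are assumptions chosen for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic loan-application data (illustrative only): an imbalanced
# applicant pool with more men than women, as fintech lenders often face.
n_men, n_women = 800, 200
df = pd.DataFrame({
    "gender": ["M"] * n_men + ["F"] * n_women,
    "income": np.concatenate([rng.normal(55, 10, n_men),
                              rng.normal(50, 10, n_women)]),
})

def rebalance_by_downsampling(data: pd.DataFrame, group_col: str,
                              seed: int = 0) -> pd.DataFrame:
    """Down-sample every group to the size of the smallest group."""
    n_min = data[group_col].value_counts().min()
    parts = [g.sample(n=n_min, random_state=seed)
             for _, g in data.groupby(group_col)]
    return pd.concat(parts, ignore_index=True)

def demographic_parity_gap(approved: pd.Series, gender: pd.Series) -> float:
    """Absolute difference in approval rates between the two gender groups."""
    rates = approved.groupby(gender).mean()
    return abs(rates["M"] - rates["F"])

balanced = rebalance_by_downsampling(df, "gender")

# Toy stand-in for a trained model: approve above an income threshold.
df["approved"] = (df["income"] > 52).astype(int)
gap = demographic_parity_gap(df["approved"], df["gender"])
```

Measuring the gap at all requires access to the protected attribute, which is the managerial point the abstract closes on: firms need to be able to collect such attributes to quantify, and then reduce, discrimination.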
Link(s) to publication:
https://pubsonline.informs.org/doi/abs/10.1287/msom.2022.1108
http://dx.doi.org/10.1287/msom.2022.1108
-
Kelley, S., 2022, "Employee Perceptions of the Effective Adoption of AI Principles", Journal of Business Ethics, 178(4): 871–893.
Abstract: This study examines employee perceptions of the effective adoption of artificial intelligence (AI) principles in their organizations. Forty-nine interviews were conducted with employees of 24 organizations across 11 countries. Participants worked directly with AI in a range of positions, from junior data scientist to Chief Analytics Officer. The study found eleven components that could impact the effective adoption of AI principles in organizations: communication, management support, training, an ethics office(r), a reporting mechanism, enforcement, measurement, accompanying technical processes, a sufficient technical infrastructure, organizational structure, and an interdisciplinary approach. The components are discussed in the context of business code adoption theory. The findings offer a first step in understanding potential methods for the effective adoption of AI principles in organizations.
Link(s) to publication:
https://link.springer.com/article/10.1007/s10551-022-05051-y
http://dx.doi.org/10.1007/s10551-022-05051-y
For more publications, please see our Research Database.