Interview with Dympna O'Sullivan: AI Policy and Gender Diversity in IT
Dympna O’Sullivan is the Academic Lead for Digital Futures Research Hub at TU Dublin. Prior to joining TU Dublin, she worked as a lecturer in Computer Science at Aston University in Birmingham and as a Senior Lecturer in Health Informatics at City, University of London. She completed her post-doctoral work with the Mobile Emergency Triage research group at the University of Ottawa. She holds a BSc and PhD in Computer Science from University College Dublin.
Her research is in the area of Applied Social Computing and includes Computing Ethics. She investigates the societal impacts of emerging technologies, including artificial intelligence and algorithmic decision-making, along with the associated privacy, fairness, transparency and bias implications of AI. She develops methodologies for Explainable AI that aim to make AI systems more intelligible to end users. Her research is underpinned by governance and legislation to ensure Trustworthy AI.
Dympna O'Sullivan, Head of Research at TU Dublin
What are your thoughts on the EU AI Act? You mentioned you are very pro-regulation. Could you please elaborate on that?
Sure, yeah. We at TU Dublin are very supportive of the EU AI Act and advocate for more robust regulation of AI in general. In fact, we were among the first universities to incorporate content on AI ethics into our curricula. When we talk about AI ethics, we're addressing issues such as fairness, bias, discrimination, and accountability. While there are specific software engineering practices that can help mitigate these risks, regulation plays a crucial role in enforcement. We believe in a broad approach to education, focusing on how the software we develop impacts end users. This broader perspective highlights the importance of regulation in mitigating those risks. There is always the argument that regulation stifles innovation, but we certainly don't believe that: high-risk fields such as pharmaceuticals continue to innovate despite extensive regulation.
There is a perception that regulations like GDPR hinder innovation. How do you think companies can navigate this challenge while still complying with GDPR?
GDPR and the EU AI Act are intertwined, and the EU has recognized the need to better prepare companies for compliance. Under the EU AI Act, individual member states will establish regulatory sandboxes to help companies prepare. These sandboxes bring together policymakers, government officials, academia, and industry to identify high-risk applications, determine documentation requirements, and facilitate compliance. This support is essential, especially for small and medium-sized enterprises (SMEs), which may lack the resources of larger companies to navigate complex regulations effectively. SMEs shoulder a disproportionate burden when they attempt to comply with legislation, so when governments implement this kind of sweeping, widespread legislation, they need to provide active support as well.
What is the TU’s perspective on generative AI in academia?
Generative AI is a hot topic in academia, with concerns primarily revolving around plagiarism. I believe the situation calls for a balanced and nuanced approach. While some institutions have reacted by reverting to traditional exams, that is not optimal for student learning, as students would then face only one form of assessment. You can't beat the tool, and using plagiarism detection software is just a game of cat and mouse. It's not just about setting essays anymore. Instead, we should be more creative and explore assessment methods that adapt to the evolving landscape of AI technology. A better approach involves collaborating with students to develop new assessment models, understanding how they use these tools, and considering what types of coursework reflect the realities of the 21st century.
Plagiarism has always existed and will continue to do so.
Exactly, exactly. This is just a new tool that allows people to do that.
Do you have any knowledge about Workday's AI initiatives?
Well, I certainly know that they're involved in AI policy. One of the reasons we're very keen to collaborate with Workday is that they are leaders in responsible AI and are actively lobbying for AI regulation. As a global company, one of the issues they're particularly interested in understanding is the harmonization of AI legislation. There's the NIST framework in the US and the EU AI Act on the other side; some parts overlap, and some don't. But if you want to build trustworthy AI at scale, you need harmonization across legislative jurisdictions. That's really important, I think, for multinational companies. Workday is also looking very much at bringing AI into their products and at workforce upskilling. One of the things we're discussing is an AI Center of Excellence at TU Dublin, which we aim to set up and run with Workday, exploring how AI can complement human abilities. We need to start quantifying that. What does it mean for an AI system to augment human capabilities? Is it as a tool, a process, or something else? If people have more time and are more creative, what will they do with it? Workday is really interested in this, and we discuss augmenting human capabilities. How do we quantify that? How do we measure the goodness of fit in the relationship between humans and AI?
TU Dublin has a TrailblazHER program which promotes diversity in tech. Can you provide some insights into this initiative?
Sure. In terms of our efforts to involve more women in technology at the School of Computer Science, we've been working on this for quite some time. Before I get into specific initiatives, let me first discuss a program we launched several years ago, when we received substantial funding from the Higher Education Authority in Ireland to address gender disparities in computer science. Our first step was to engage with our female students. We wanted to understand why they chose to pursue computer science, as well as why some of their peers did not. A recurring theme was a fear of mathematics, which highlights the need to encourage more women to pursue mathematical subjects.
Could you expand on that?
One anecdote that stands out is from a female student who recounted feeling overwhelmed on her first day in a computer science class where she was the only girl. Despite her initial apprehension, she stayed. The story stuck with me; I found it terrifying to consider how many students in that situation had simply walked away. Stories like these underscore the importance of creating a supportive network for women in technology. To address this, we've implemented various programs aimed at bringing female students together, particularly in computing and engineering. These include mentoring initiatives and outreach to schools, where we aim to dispel the misconception that working in tech is solely about programming.
Our TrailblazHER program focuses on three key components: school outreach, university mentoring, and support for women entrepreneurs. It's essential to continually prioritize diversity and inclusion efforts, recognizing that it's an ongoing process. Diverse student populations and workforces are not only more productive but also contribute to a richer learning and working environment. Additionally, we incorporate diversity awareness into our curriculum. For instance, in their third year, students collaborate on projects with a local community hospital serving individuals with intellectual disabilities. This hands-on experience fosters empathy and encourages students to consider diverse perspectives in technology development.
Thank you very much for the interview!