Reflections on IAPP from a Chief Data Officer
It’s been almost a full month since I attended my first IAPP conference in Brussels, Belgium.
As the Chief Data Officer at BigID, I’m always immersed in conversations with privacy and compliance teams about how they manage their sensitive and personal data. And since the conference was focused on AI Governance (hey, AI is for everyone, not just lawyers! And I know a little something about governance too!), I wanted to attend to see how AI was being discussed outside of the United States and to participate in the discussions with global peers.
Here are some of my key observations from the two-day conference. I wish I could have attended more sessions, but the structure of the agenda only allowed me to fully participate in three to four sessions a day, including the Opening and Closing General Sessions. I did, however, review all the available presentations offline, so there will be a lot of reading on frameworks this summer.
- The Opening and Closing General Sessions were dominated by big tech companies such as Microsoft, Google and Meta. Understandably so: as sponsors of the conference, they held the key slots for addressing the audience. They all spoke of their company’s role in AI, its untapped benefits to humanity, and their commitment to working with regulators across global regions. Brad Smith, President and Vice Chair at Microsoft, drew an analogy between AI and earlier technologies that revolutionized our society, such as the printing press and electricity. As we all nodded in appreciation of the insights shared by the technology executives, I couldn’t help but think back to Professor Shannon Vallor’s academic keynote on her book The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. Reflecting on AI as a mirror: have we grown myopic in our views of AI’s benefits and regulation, given the speakers and leaders we surround ourselves with? As we look to the future in our AI mirrors, will it be the best reflection of humanity? These and many more questions about ethics and humanity were explored in Professor Vallor’s keynote and book.
- In this globally significant election year, the rapid uptake of Generative AI should be a cause for alarm for governments, citizens and the media. One of the most enlightening panel discussions for me was led by Karen Silverman, Founder and CEO of The Cantellus Group, in the session entitled Wild Horses: Can We Tame Disinformation in 2024? The panel clarified the difference between “disinformation” and “misinformation”: disinformation carries malicious intent, now amplified by the proliferation of GenAI technologies that create deepfakes in video, images and voice. The terminology was new to me – or rather, I was not attuned to hearing it at US-centric data conferences. From a privacy protection perspective, however, this is an alarming topic that goes to the heart of AI trustworthiness. What exactly are organizations in the private and public sectors doing to educate the public and counter disinformation in the media? Here in the US, we have seen firsthand the negative effects it has on our election process. Lastly, this session reminded me of Carnegie Mellon’s Block Center for Technology & Society, which recently released its Responsible Voter’s Guide to GenAI and Political Campaigning. Check it out here and share it with your friends and family.
- Assessments versus auditing of AI systems. It is still early days for structuring proper audits of AI systems. Auditing is commonly applied to financial systems and processes, where the procedures and data inputs are clearly identified and documented. While the same can be done for AI systems, ISO/IEC 42001 requires organizations to conduct AI risk assessments that identify potential risks to society and users and determine the potential consequences of AI. According to the official guidelines, ISO/IEC 42001 is for: “Organizations of any size involved in developing, providing, or using AI-based products or services. It is applicable across all industries and relevant for public sector agencies as well as companies or non-profits.” Much of the discussion in Brussels focused on developers of AI systems, but attendees also raised questions about the responsibilities of, and protections for, non-developers. These specific protocols are still unclear, yet they apply to the majority of smaller consumer enterprises. (Update: Since attending the IAPP AI Governance conference, I have traveled around Switzerland to discuss similar topics with local customers and prospects. Attitudes toward the EU AI Act differed: many still see it as an academic and legal exercise that needs to be tested before it is fully accepted. Conversations about understanding the usage of AI systems made clear that more education still needs to happen, which leads to my last and final point below.)
- Lastly, the maturity and readiness of organizations to embrace AI regulations is still low. Conversations with General Counsels and Data Protection Officers focused on the basic issues of identifying and measuring bias at a conceptual level. While AI Governance tools and consultants were plentiful in Brussels, technology vendors are still limited by the education and upskilling of the workforce needed to properly implement AI Governance programs. Fundamentally, scaling AI governance programs still rests on the foundations of data – including metadata management, data management and data governance – to support the automated identification and control of data. Currently, the majority of data-related roles are technical in nature, specifically in data engineering or data analytics. There is still a lack of education on the business side of data governance and data management, even though those skills are fully transferable to AI Governance.
- The biggest challenge facing all the participants last month rests on our shoulders as we return to our desks: what is next? How do we operationalize AI Governance? At the highest level, the EU AI Act in its current form is still being finalized. The concepts of protection, the frameworks for AI, and even the benefits of GenAI represented solid steps towards consensus and shared understanding among the audience. There has certainly been a clear step in the right direction since the inaugural AI Governance conference. The next step is the HOW: what should we all be doing next? How do we operationalize and sustain the preparedness of our data?
TheAssociation.AI is leading the way in uniting the community and empowering members to interpret new policies, leverage data processes, and collaborate across privacy, security and technology teams. We have also been tested by compliance regulations in the past and have proven to be innovative. The AI mirror should also offer a rearview look at the lessons we learned from GDPR: as a data professional, I see that more companies are now aware of the importance of sensitive data management. The future of AI Governance should continue to be a collaborative effort among privacy, security and data professionals working on governance and controls alongside business partners to maximize the benefits and value of GenAI.