As you begin making your AI solution a reality, the best way to ensure effective and responsible implementation is to put data governance guardrails in place. Spanning people, processes, and technology, data governance defines how data and AI systems are managed throughout their lifecycle. In this module, you’ll ask questions like: “Who is responsible for maintaining data after it’s collected?”, “What’s our policy on who can access data and what permissions they should have?”, “What processes can help ensure strong data quality and reduce the misuse of data?”, and many more.
Strong data governance builds accountability and trust among teams and organizations, which ultimately results in improved data quality, better decision-making, and more responsible AI solutions.
PJMF has not developed content for this module yet, but let us know you’re interested, and we’ll keep you in the loop if/when it’s released.
<aside>
How do we go about developing an effective AI model, and once it’s refined, how do we deploy it to the world safely?
</aside>
<aside>
From user testing to rapid prototyping, what product development practices will support a successful integration of AI?
</aside>
<aside>
What is our organization-wide strategy for how AI should be leveraged to further our mission, and how will we get there?
</aside>
The Patrick J. McGovern Foundation (PJMF) is a global, 21st century philanthropy committed to bridging the frontiers of artificial intelligence, data science, and social impact.