The 2018 Assembly Projects

The 2018 Assembly cohort came together around the challenge of artificial intelligence and its governance. Over four months, they took part in a rigorous two-week design-thinking and team-building sprint, participated in a spring-term course on the Ethics of AI co-taught by Jonathan Zittrain and Joi Ito, and developed their projects over a three-month development period.

Below, you can read about the diverse set of projects the teams are developing. They range from projects that help cities and communities, like AI Policy Pulse and AI in the Loop, to a project that tackles problems at the level of the dataset, the Nutrition Label for Datasets, to projects that analyze and comment on current practices in AI, such as equalAIs and Project Ordo. Finally, f[AI]r Startups starts from the recognition that implementing AI ethically is difficult and aims to offer other companies a service that helps them do it.

AI Policy Pulse | AI in the Loop | Nutrition Label for Datasets | equalAIs | Project Ordo | f[AI]r Startups

AI Policy Pulse

Website

AI Policy Pulse is a playbook for cities looking to build or buy AI technology. Through case studies from across North America, it surfaces emerging questions, common challenges, and best practices to consider.

We interviewed dozens of city builders, policymakers, and technologists as input for the project. Our insights include case studies and highlight the top questions cities should be asking themselves when considering AI or other predictive, automated decision-making in city spaces. In the coming weeks, the report will be published on our interactive website for city builders to use as a reference in their work.

AI in the Loop

Website

When we think about AI algorithms, we tend to think about “human-in-the-loop” algorithms, ones that incorporate human reflection and human input into the system. With “AI-in-the-loop”, we propose an adjustment to that framing: AI input and reflection feeding into human systems, driven forward by those most impacted by systemic oppression.

Nutrition Label for Datasets

Website | Prototype | Github

Algorithms matter, and so does the data they’re trained on. One way to improve the accuracy and fairness of algorithms that determine everything from navigation directions to mortgage approvals is to make it easier for practitioners to quickly assess the viability and fitness of the datasets they intend to use to train them.

The Nutrition Label for Datasets project aims to drive higher data standards through a diagnostic label that highlights and summarizes important characteristics of a dataset. Like a nutrition label on food, our label identifies a dataset's key ingredients, including but not limited to metadata, diagnostic statistics, the data's genealogy, and anomalous distributions. It gives developers a single place to get a quick overview of the data before building a model, ideally raising awareness of bias in the data in the context of ethical model building.

Our current prototype, built on the Dollars for Docs dataset from the Centers for Medicare and Medicaid Services (CMS) and generously made available to us by ProPublica, presents a number of modules spanning qualitative and quantitative information. It also makes use of the probabilistic computing tool BayesDB. You can learn more about the work on our website or contact us at nutrition@media.mit.edu.
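
For a concrete sense of what the quantitative side of such a label could contain, here is a minimal sketch, not the project's actual implementation, that computes a few of the "ingredients" named above (basic metadata, diagnostic statistics, and a crude skew flag) for an arbitrary CSV. The file name and the skew threshold are illustrative assumptions.

```python
import json
import pandas as pd

def dataset_label(csv_path: str) -> dict:
    """Sketch of a dataset 'nutrition label': metadata, per-column
    diagnostics, and a crude skew warning. Illustrative only."""
    df = pd.read_csv(csv_path)
    label = {
        "metadata": {"rows": len(df), "columns": list(df.columns)},
        "diagnostics": {},
    }
    for col in df.select_dtypes("number").columns:
        s = df[col].dropna()
        label["diagnostics"][col] = {
            "missing_fraction": round(float(df[col].isna().mean()), 3),
            "mean": float(s.mean()),
            "std": float(s.std()),
            # Flag heavily skewed distributions as worth a closer look;
            # the threshold of 2 is an arbitrary illustrative choice.
            "skew_warning": bool(abs(s.skew()) > 2),
        }
    return label

if __name__ == "__main__":
    # "payments.csv" is a hypothetical file, not the actual CMS data.
    print(json.dumps(dataset_label("payments.csv"), indent=2))
```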

equalAIs

Website

Ubiquitous automated surveillance by government and the private sector is likely to chill protected free speech, association, and religious activity, and to weaken constitutional protections for citizens accused of crimes. We believe there should be more public discourse around the choices we want to make, as individuals and as a society, about how our data, images, and facial recognition will be used, and that more tools are needed to make, express, and enforce those choices. The team set out to assess the technical feasibility of adversarial attacks that defeat facial recognition. We studied a 'simple' approach and examined how well it generalizes across common platforms. Our solution also supports a conversation about whether we want to, and should be able to, have a say in how our society uses biometric data, facial recognition, and surveillance tools.
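
The write-up above does not name the 'simple' approach the team studied. For intuition only, here is a minimal sketch of one widely known adversarial technique, the fast gradient sign method (FGSM), applied to a generic differentiable image classifier; the model, tensors, and epsilon here are placeholder assumptions, and real face-recognition pipelines involve detection and alignment steps not shown.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """One-step FGSM sketch (not the team's published method):
    shift each pixel by +/- epsilon in the direction that
    increases the classifier's loss on the true label."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The gradient's sign gives the loss-maximizing direction per pixel.
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return adversarial.clamp(0.0, 1.0).detach()
```

Given a classifier and a normalized image tensor, the returned image typically looks nearly identical to a human observer yet can flip the model's prediction, which is what makes such perturbations interesting as a privacy countermeasure.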

Project Ordo

Website

Project Ordo takes inspiration from history, biology, and artificial intelligence to help train autonomous vehicles (AVs) to be safe and welcoming for the members of each community they enter.

Anybody’s accident is everybody’s accident. We propose a framework for blameless accident reporting, closely modeled on aviation, where deaths from commercial passenger jet crashes fell to zero in 2017.
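
To make the blameless-reporting idea concrete, here is a minimal sketch of what a de-identified incident record might look like; every field name here is a hypothetical illustration, not Project Ordo's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class IncidentReport:
    """Hypothetical blameless AV incident record: no driver, owner,
    or vehicle identifiers, only what other systems can learn from."""
    road_type: str                 # e.g. "urban intersection"
    weather: str                   # e.g. "light rain"
    maneuver: str                  # what the vehicle was attempting
    contributing_factors: list[str] = field(default_factory=list)
    narrative: str = ""            # free-text account, aviation-style
    lessons: list[str] = field(default_factory=list)

report = IncidentReport(
    road_type="urban intersection",
    weather="light rain",
    maneuver="unprotected left turn",
    contributing_factors=["occluded pedestrian", "sensor glare"],
    narrative="Vehicle initiated the turn before the crossing cleared.",
    lessons=["Delay turns when crosswalk occupancy is uncertain."],
)
```

The design choice borrowed from aviation is that the record carries a narrative and lessons rather than identities, so reports can be shared across manufacturers and cities without assigning blame.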

We’ve created tools to help vehicles make more informed decisions by introducing the notions of ‘Common Sense’ and ‘Community Sense.’ This allows cities, civic organizations, activists, and conscious citizens to contribute their knowledge and be active participants in the way the complex AI systems in each vehicle make decisions.

We propose a Driving School for AVs, a certification process agnostic to the underlying technology, and a new epistemological category of workers, AI Sherpas, who will use Project Ordo to help autonomous vehicles learn to drive responsibly.

f[AI]r Startups

Website

f[AI]r Startups aims to educate founders, investors, mentors, and accelerators about how startups can and should build AI ethically from the earliest stages of product development, without significant cost or distraction from the company's vision.

We know that small companies building AI may not have the resources to address ethical questions on their own. How can we help early-stage founders and product teams address ethics and social biases in their product development cycles early on? What would it take to make ethics, fairness, and accountability a core value among startups when developing new products?

To help make this a reality, f[AI]r Startups hosts workshops and events and curates resources for startups on building tech ethically. We are creating a community of AI practitioners and starting a movement to bring ethical AI to the forefront of the startup ecosystem.

Contact

Assembly is a project run out of the Berkman Klein Center for Internet & Society at Harvard University and the MIT Media Lab. The project is part of the Ethics and Governance of Artificial Intelligence Fund.

If you have any questions about the program or would like to get in touch, please email us at info@bkmla.org.