The latest publication from the “Sector 3.0” program, “Internal rules for the use of artificial intelligence. A guide for social organizations,” is a free resource for those who want to use AI tools in their activities in an informed, safe, and ethical manner.
Artificial intelligence opens up enormous opportunities for NGOs – from improving communication and data analysis to administrative support. However, the lack of systemic rules for its use gives rise to legal, ethical, and organizational risks.
The growing interest in the use of artificial intelligence in the third sector goes hand in hand with concerns about security, law, and ethics. A study conducted by “Sector 3.0” and Klon/Jawor, published in July this year, shows that only 1% of organizations have implemented AI in a systematic way, while the majority operate intuitively, without structured rules. As a result, adopting AI tools in the social sector may bring many opportunities, but just as many risks. Data leaks, unfamiliarity with legal regulations, and limited knowledge of how individual tools work are just some of the dangers that organizations fear.
“This publication is our response to these challenges. Together with the experts invited to participate in the project, we have prepared a set of guidelines and recommendations to help organizational leaders introduce internal rules for the use of AI tools. On the one hand, such rules will protect the organization, and on the other, they will give the team a sense of security by clearly defining what is and what is not permitted,” says Dr. Dawid Szarański, director of the “Sector 3.0” program, under which the publication of the guide was initiated.
The guide consists of four parts, each of which addresses important aspects that cannot be overlooked in the context of using artificial intelligence-based tools: law, security, ethics and values, and data management.
Using practical aids such as checklists, sets of rules, and sample prompts, the e-book offers guidance for social organizations on specific challenges, including how to develop guidelines for the use of AI in an organization, how to protect and anonymize confidential data, how to measure the effects of AI implementation in NGOs, and how to maintain consistency and values in the age of AI.
The authors of the publication have focused on practice: instead of dry clauses lifted straight from codes and regulations, they pose important questions and emphasize mindfulness and discussion, so that each team can adapt the guidelines to its organization's mission and needs.
The team of experts responsible for preparing the e-book consists of: Milena Balcerzak and Aleksandra Maciejewicz (LAWMORE Law Firm), Ryszard Dałkowski (APN Promise S.A.), Katarzyna Drożdżal (Selkie Study, WUD Silesia), Wojciech Wilk, and Dr. Dawid Szarański (“Sector 3.0”).
The guide is the third in a series of publications on the use of artificial intelligence in non-governmental organizations released over the past several months. It was preceded by conclusions and recommendations from a series of specialized training sessions, and by a comprehensive research report prepared together with the Badania Klon/Jawor team.
The e-book “Internal rules for the use of artificial intelligence. A guide for social organizations” can now be downloaded free of charge from the “Sector 3.0” program website. The publication was produced as part of the “Sector 3.0” program – an undertaking of the Polish-American Freedom Foundation, managed by the Information Society Development Foundation.