Artificial Intelligence: new rules of the game for philanthropy

It simplifies processes, increases accessibility and provides essential insights: Artificial Intelligence (AI) is significantly changing philanthropy. That is the first interim conclusion of our Learning Journey, and a scientific conference in Geneva underlines it.

In December, seven funders, seven non-profits and one expert launched the live track of our first AI Learning Journey. Interestingly, the non-profits already use the new technology for every third application, as we know from the survey in the Foundation Barometer. Funding organisations are more sceptical. Above all, they hope for support in assessing incoming requests and evaluating reports. With good reason, because AI is far more reliable than a human being when it comes to “cross-reading”.

Perfect applications

Artificial intelligence refers to the ability of machines to bring human-like intelligence to the performance of tasks. Since Large Language Models (LLMs) such as ChatGPT have dramatically increased their performance, they are creating a new starting point for many industries, including philanthropy.

Because the new helpers are linguistic masters. If you provide them with specific information about your own project and about the funding organisation you are applying to, you will arrive at a perfect application in just a few clicks. Language barriers are eliminated, the effort is reduced and the result is more comprehensible and readable.

The focus is on the quality of the project, its sustainability, and resources such as the team, sound financing or good anchoring in the existing ecosystem. However, a “perfect application” also creates new challenges. It is and remains only a proxy, comparable to a job application: proof of qualification takes place in day-to-day work. For the application process, this means that a personal interview with a potential project manager, or background information on the executing organisation, becomes more important. The same applies to exchange and comparison with other foundations that have been approached.

Not everything that is technically possible makes sense.

The proof of the pudding is in the eating

We are developing specific offers as part of the Journey (see opposite page). In response to the discussions, we have also updated our technical infrastructure together with PeakPrivacy.ch. Whether you are a funder or a non-profit, you can now experience the power of AI for yourself, in a secure environment. Important: users decide for themselves whether or not their data is processed with artificial intelligence, because not everything that is technically possible makes sense. In addition to ethical and ecological issues (AI consumes a lot of energy), the risks of this promising technology must also be kept in mind.

The legally prescribed protection of personal data, for example, is demanding, as is the protection of competitively relevant and sensitive information: this includes project dossiers and applications, but also their assessment or the reasons for rejection. The strength of artificial intelligence is also its weakness: it is constantly learning. In other words, every question we ask potentially makes future answers more precise, so it is very tempting to use user data specifically to improve the models. With ChatGPT, this can only be ruled out in the paid version, and even then: trust is good, control is better. At StiftungSchweiz, we are therefore ushering in a new chapter: as the first provider in philanthropy, we are hosting artificial intelligence ourselves to ensure that use of the technology leaves no traces in the models.

Scientific support

The Learning Journey on artificial intelligence is accompanied by Lucía Gomez Teijeiro from the Geneva Centre en Philanthropie (GCP). An exciting conference was recently held at the University of Geneva. It aimed to raise awareness of the role of philanthropy in promoting an ethical and inclusive approach to artificial intelligence from two perspectives: “AI for philanthropy” and “AI enabled by philanthropy”.

There is a lot for interested foundations to do in both fields, although there are not yet very many of them. Sociologist Patricia Snell Herzog from Indianapolis has found around 300 organisations worldwide: there are hopeful applications in the area of climate change, she says, but in general we are still a long way from best practice. Aline Kratz-Ulmer, a foundation expert from Zurich, also sees a long road ahead. For Swiss grant-making foundations, the first big step is to switch from analogue to digital application and grant management.

“Agency, not intelligence”

But is technology the core competence of foundations? Of course not, Nelson Amaya Durán from the OECD is convinced, adding: “Philanthropy runs on people and ideas — not on technology”. And yet technology is becoming increasingly important, not only but also in philanthropy.

Yale professor and founder of the Digital Ethics Centre, Luciano Floridi, puts it even more succinctly: artificial intelligence only works if we largely adapt our systems to it. One example: it is conceivable that one day we will travel exclusively in fully autonomous vehicles. But for this to happen, all roads would first have to be rebuilt and optimised for AI.

Floridi goes even further, however, and fundamentally questions the “intelligence” in AI. He freely reinterprets the acronym “AI” as “Agency, not Intelligence”. For him, AI gives a technical system a far-reaching ability to act. In his understanding, it is a very powerful, autonomous technology in the broadest sense, but not a form of intelligence.

A plea for cooperation

Luciano Floridi reserves intelligence for people, and especially for people in philanthropy. According to Floridi, philanthropy is designed for collaboration and cooperation, and this is precisely where its power lies: because it stands outside of competition, it can maximise its impact when the players work together. In other words, it can achieve more when well networked.

So is AI, stripped of its intelligence and disenchanted, a completely normal technology component in our everyday digital lives? Sebastian Hallensleben is one person who should know. His current task is to develop the standards needed to implement the EU’s new AI legislation. The challenge is of a fundamental nature, says Hallensleben, and consists of safeguarding authenticity and identity in the digital space. He also warns that while regulatory frameworks are important, they are not enough to limit the potential damage caused by AI. And as Francesca Bosco from the CyberPeace Institute in Geneva impressively points out, the risks are manifold and their damage potential should not be underestimated.

Ethical artificial intelligence — and a lot of pragmatism

This is where philanthropy comes into play again. It can significantly promote the ethical — and that means primarily: responsible — use of AI. Many experts see such good practice as the most effective means of combating the improper or dangerous use of the new possibilities.

According to Luciano Floridi, it is not only the misuse of AI that is unethical, but also its excessive or wasteful use, as well as its non-use. It is both unethical and uneconomical not to allow civil society or social minorities to participate in the new opportunities (among the minorities he also counts young people and women, who are not usually at the centre of technology trends). Foundations could ensure that AI is just as available in such areas as elsewhere, for example by reducing opportunity costs or contributing to development costs.

Pragmatism is therefore called for, something Nelson Amaya Durán also welcomes. Philanthropy should generally get off its high horse. It traditionally does what the private sector ignores and what the authorities cannot address quickly enough — no more and no less. And that is actually a good thing and the right thing to do. One example of such a foundation is the Chicago-based McGovern Foundation — probably the only foundation in the world whose purpose focusses exclusively on AI.

However, this example in particular shows that there is still a long way to go. The foundation has just published its first civil-society-focussed AI, an assistant for investigative journalism. For artificial intelligence to become a tangible field of application for foundations, many more such prototypes and case studies are needed to make the practical added value of the new possibilities tangible. And this is exactly what the AI Learning Journey is all about (along with two special boot camps for funders and non-profits).


