Ethics is crucial when interacting with others, shaping our behaviors and decisions. This principle extends to technology, where ethical considerations guide the development and use of innovations, explains Katja Rausch, founder of the House of Ethics.

Katja brings a global perspective to her work. She speaks five languages and has an extensive background as an independent ethics consultant, lecturer, and author.

With over 12 years of teaching experience at the prestigious Sorbonne University and Paris School of Business, she has been at the forefront of integrating ethical considerations into technology and business education.

In this episode of the AImpactful video podcast, Katja shares the origins of the House of Ethics, a unique platform that merges diverse cultural and professional insights to tackle contemporary ethical issues. Katja delves into the critical importance of AI ethics, data integrity, and the human elements that should guide our technological innovations.

Join us in this episode to hear more about responsible AI usage and individual and collective ethical responsibilities.

Transcript of the AImpactful Vodcast

Branislava Lovre: Welcome to AImpactful. Today, we will talk about ethics and AI. Our guest is Katja Rausch, founder of House of Ethics. Welcome, Katja.

Katja Rausch: Thank you very much for your kind invitation. So, I’m very much looking forward to our exchange.

Branislava Lovre: Can you share with us the story behind the House of Ethics? How did it all start?

Katja Rausch: The House of Ethics is a post-pandemic baby. For many people around the world, a lot of things changed. I was previously a professor in Paris, teaching information systems and data ethics. After the pandemic, after teaching for 15 years, I missed the ethical side of management teaching. It is always very strong on methods and systems, but you also need to know why you decide, how you decide, and what the purpose is. That was the reason behind the House of Ethics.

Why did we choose the name House of Ethics? At the time, everybody was into a laboratory of ethics, an institute of ethics, an academy of ethics. And I chose the very modest House of Ethics. Because, you know, ethics is the rawest form of us. And where are we the most authentic? It's at home. That's where our values and our principles are safe and where we voice them in our own style. We are not formatted at home. And that's why it was very important for me to call it House of Ethics, because everybody's welcome. I wanted it to be pluricultural and multidisciplinary. So far, I think we have done rather well. People like it quite a lot because there is this inclusive side and the democratizing side of ethics, which I like a lot to share and talk about. That's why I'm very happy today to talk with you.

Branislava Lovre: When someone visits your website, they will see that you emphasize not using AI for content creation. Why did you make this decision?

Katja Rausch: So, the decision was not a superficial decision; it was grounded. First of all, I have a technical background. I’ve been teaching information systems for 12 years. So, I know what databases are all about. I know what a relational database is. I know what data centers are and what data leaks are. I know about profiling, I know about slicing and dicing. So, I know about data.

When I saw the transformer models, which are a very different kind of artificial intelligence, my first question was: where do they get the data from? When I researched this in November 2022, I saw that it involved industry-scale data theft and IP violations. If something does not sound ethical to me, I don't use it. So, data was the first concern.

The second reflection was in regard to our business; we are the House of Ethics. These technologies are general-purpose technologies. Are they fit for critical thinking? Are they fit for judgments? Are they accurate for helping people make moral decisions? Our answer is no because the technology is not reliable for that.

We heard about hallucinations, about people being cited for things they never did and never wrote. In the first months, everybody was jumping on generative AI and bragging about using it. Now they all retract and say it's just not reliable, that we need a fine-tuned version of it, with our proprietary data and confidentiality.

The third reason is simply the equilibrium. Right now, we see people using generative AI for everything, for writing. We see it on LinkedIn. You do not even need to write your post anymore. I am a firm believer in effort, in our brain, in the human brain, and in originality. Generative AI, especially what we see with OpenAI, with Bing, these are not search engines, these are response engines. They give you one response, and that’s it. I am against one-sided thinking. The critical thinking part for me is extremely important.

That was the reason why we, the contributors, don’t use it. We write in our words because I like personal stories of people. I just like it because we are so many out there, and everybody ticks with something else. So let’s keep that up.

Branislava Lovre: Your basic idea is to discuss complex topics in a simple way.

Katja Rausch: I think the harder or more complex the issues are, the easier or clearer your words and thinking have to be. If you write in an academic style, or lawyers write in their own style, or administrators in theirs, the style overshadows the thinking, because people who read it don't really understand what is meant. For me, the reader comes first, the one I'm talking to. I really care about the person understanding what I want to say. That's why I encourage everybody who is contributing to the House of Ethics to use their own words and their own styles, because then they are authentic and they make themselves extremely clear, not only with their words but also with their thinking methods.

Ethics is such an important part of daily life, not just business. You need to resonate, you need to make yourself understood. That’s why I think keeping your own style is more important than footnotes about quotes and showing off who you know and what you have read. Instead, say clearly what you think and help the other person to maybe share and develop or progress if they want to. That’s my view at least.

Branislava Lovre: We’re talking about AI and ethics. Could you explain what AI ethics means?

Katja Rausch: There is a difference between ethics and AI ethics because ethics, per se, is actionable philosophy. When we talk about AI ethics, it’s applied ethics to a field—artificial intelligence—like bioethics is applied ethics to biology. So, that’s the first important difference: you switch from philosophical, abstract thinking to a practical application within the discipline. What does that mean? It means that concepts like responsibility, fairness, and equity pop up because you make decisions. Once you act within your professional field, you have to make decisions; otherwise, you don’t move. That’s why AI ethics, or ethics applied to artificial intelligence, is the intersection of technology and humanities. It is so important because right now it impacts all of us, not just as a product, but as a system. And that’s important.

Branislava Lovre: The ethical use of data is one of the most important topics at the moment.

Katja Rausch: Well, data per se is raw. If you don’t use your brain, your data doesn’t tell you anything. You have to interpret your data, you have to analyze your data. That’s how you get information. A very famous marketing use case involved Walmart and the sale of diapers. They tried to find out why, on Thursdays, the diapers at Walmart were always sold out. The database just showed the figures, but the brain needed to interpret what was behind those figures. It turned out that those who went grocery shopping were men who always bought diapers and beer together because Fridays were football game days. This was a tendency they read through the data.

So, you have your raw data, and you have to interpret it to get information. Eventually, after information, you will have your knowledge. It's upon your knowledge that you will make decisions. Figures per se, not contextualized or interpreted, are useless. There would be no artificial intelligence without data. Artificial intelligence is based on big data, which is processed by algorithms, and without that, we wouldn't have any of this. So data is really the essence of all of this. We need to have a closer look at that. We cannot just say, "Oh, that's okay." No, we need quality data. We need integrity with data. We need transparency. We need explainability. That's what we need.

Branislava Lovre: What are some of the main challenges in this field right now?

Katja Rausch: The biggest challenge is the pressure. They tell us that if you don't do it right now, you've missed the train and you will go out of business, that you will miss your only chance. Even workers are told that if they don't use it, their skills won't suffice, that somebody who is not using generative AI won't be there in the next ten years. It's fear of missing out. So many people do something that is totally useless, sometimes even counterproductive, or at least not as productive as imagined. The way to challenge that is by staying grounded. Just stay grounded. Think, breathe, relax, and think again. Ask yourself: do we really need that? What is our business objective? What do we want? What is our product? What does it lack? Can it be improved by generative AI, by normal AI, or by traditional science?

This is what you need to do. You need to use your brain more than ever. You need your brain more than ever. And you need your heart more than ever these days.

Branislava Lovre: What are the biggest concerns in looking into the future? What should we focus on?

Katja Rausch: I would say there are two levels: the individual, or micro, level and the macro level. On the individual level, we shouldn't underestimate ourselves. We need to know what the qualities of our humanness are and see AI as a tool, not as an alter ego or a super ego. That's the first thing.

The second thing is on the macro level. My big concern is that tech stopped being tech; it became business and now politics. On the macro level, we need to ensure that we don’t get swept up by all of that. We need to stand firm and appreciate how fantastic it is to be human and to exchange ideas together, knowing that we are stronger and smarter than what we have created. We must avoid falling for the Pygmalion complex, where we fall in love with our creation and destroy ourselves afterwards.

Branislava Lovre: What are the ethical considerations that creators and users should keep in mind regarding the responsible use of AI?

Katja Rausch: Well, that's a very important question. As users, what should we think about? First of all, we should think about the purpose. Do we need it? Why do we need it? Why do we use it? Because the implications are quite large, not just on the cognitive side, not just on the business side, but also on the ecological side. We know that using this kind of technology damages our planet irrevocably. So, do we really need to generate ten images per minute for fun? Just today I saw the DALL·E Playbook. It's a guide to DALL·E image generation and prompting, but do we really need that?

To make it clearer, individual responsibility is paramount. It’s not just about always pointing at the bad ones in Silicon Valley. We should also have a closer and more critical look at ourselves—individual responsibility. That’s the first thing.

I like that idea, and that is really what ethics is about, because ethics is not only about me and my values. Ethics is also the interaction with others, because alone on an island you don't need ethics. You need ethics once you interact with people. So there is the idea we call do ut des, "I give so that you may give," which is a give and take. You should be clear about the reciprocity of what you do. It's not just a single, silent act. Most people don't really think about it like that.

So, I think it is the interaction, the social responsibilities, and the individual responsibilities that sometimes lead us to dilemmas. Right now people say, "No, but you should use it, otherwise you are missing the train." It's the FOMO, the fear of missing out. But on my side, I say think about purpose first and then think about equilibrium. Is it really something that helps you get where you want? Do you really need it 100%? Or maybe you need it 20%, or 17%? Right now, everybody is just jumping into it, so maybe be a little more grounded and not that euphoric.

Branislava Lovre: How can we improve in the field of responsible AI usage?

Katja Rausch: I think we should just try to be the best possible version of ourselves in whatever we do and be vigilant about those who don’t care much about others. Well, maybe that’s a message to help others and to be cautious about those who have self-serving ethical objectives.

Branislava Lovre: What are the plans for House of Ethics?

Katja Rausch: We at the House of Ethics have developed a novel approach to ethics called collective ethics, which is very different and is termed swarm ethics. It involves collective intelligence and a horizontal approach. That’s what I have been talking about throughout our interview. With this kind of novel approach, we are currently in discussion with a major university. I cannot yet say which one, but we aim to integrate our concept, which stands at the crossroads of anthropology, complex systems, and digital technology, into their syllabus as a novel approach to the dialogue between tech and humanities.

The next partnership we are likely to form is about empowering individuals in our high-tech society. The founder of this company, which is also nonprofit, is a high-level lawyer, an activist, and a professor at a university. Together, we will use swarm ethics as a use case in developing educational frameworks for empowering people. It’s quite a challenge.

Additionally, we have our project on cyber ethics and cybersecurity, but from a cyber intelligence approach, which is not just about privacy and the usual discussions. It also involves business intelligence, people intelligence, and of course, cyber intelligence, but in a more comprehensive manner to discuss shared grids of responsibilities.

That's our latest project. We are working on it and are absolutely open to welcoming new associates to expand the scope of what we do. So far, we have about ten people in different countries who are highly interested. This is something that's moving along nicely. Of course, who knows what the future holds? We are always looking forward to it. The new developments are feeding us, and that's how it should be.

Branislava Lovre: Thank you, Katja.

Katja Rausch: Thank you very much. It was a pleasure.

Branislava Lovre: You have watched another episode of AImpactful. See you next week.