
Ethical AI in the workplace: “AI works like a mirror”


3 theses for… Hans de Zwart.

Chatbots, AI agents, and HR automation: AI is driving significant changes in communication and HR. Many employees view these developments with suspicion. With good reason? We asked Hans de Zwart, a researcher and lecturer at the Amsterdam University of Applied Sciences. As a philosopher, he focuses on the ethics and philosophy of technology. We presented him with three propositions.

IBM, the American multinational technology company, already has its own internal "talent management system," which uses AI to automatically connect employees with growth opportunities within the company. Other companies are experimenting with chatbots that match new candidates with the right HR professional. And there are now tools companies can use to predict who will soon hand in a resignation letter, based purely on their behavior in the workplace. There's one small ethical problem, though: the program only works if employees are unaware it's running. And there are plenty more ethical issues to consider. Let's see what Hans thinks.

1. AI is making us lose the human touch in our work

In some domains, yes. AI in HR often relies on one specific technique: pattern recognition. You look at what kind of people were successful in the past and train a model on that data to match new candidates. That means applying a probability that holds for a group to an individual. That's problematic, and in this setup you can't avoid discrimination. As long as we live in a society where discrimination is a problem, you'll see that reflected in the technology.

Another example is Amazon. They use AI to work more productively. The tasks AI can't perform, in the warehouse for instance, are still done by people. Only at Amazon, those people are now treated like robots. This is a case where AI is heading the wrong way, due to unequal power relations, a lack of labor rights, and a focus on maximizing profit. If we keep democratizing AI, things could also go in the right direction.

2. Employees without AI skills will quickly become redundant

I see this on a small scale at my university of applied sciences. Some students there don't want to use AI, or simply don't know how. They see fellow students getting better grades with far less effort. That's frustrating, of course. I wouldn't be surprised if it's exactly the same in work situations. At the City of Amsterdam, for example, employees are not allowed to use AI for privacy and security reasons. The employees who secretly use it anyway have a communication plan ready more quickly, which creates tension among colleagues. So you don't become completely redundant as an employee, but you do fall behind.

What I often see these days, including with my students, is that people use AI to deliver lower-quality work in less time. While we're actually at a point where the combination of humans and AI yields the best results, provided you use it correctly. For example, I recently had AI write an abstract for my article. An abstract always follows a fairly fixed format, and I quickly discovered that AI did it faster and better than I could. That was quite sobering. But if you know how to use AI and where your own added value lies, you can actually work faster and deliver higher quality.

3. We will always lag behind the development of AI in terms of policy and regulations

No, I wouldn't say that. There's always a dynamic interplay between the two. AI didn't emerge in a vacuum: the AI we have now is the result of how we've arranged things so far and of the space and opportunities we've given it. Moreover, it's difficult to devise rules for something that doesn't yet exist. As far as I'm concerned, it's not a bad sign that our democracy and legislation proceed more slowly and thoroughly than technological innovation.

I think we should take more time to explicitly consider the use of AI. You want people who use it to apply a certain degree of critical thinking. How can I best use it? What do I gain from it in this case? Or rather: what do I lose? And if I lose something, can I compensate for that in another way? AI acts like a mirror. For example, it has shown us that our education is actually flawed. Students have become study robots, often focusing solely on grades. And that's a logical reaction to how our education is organized. With AI, they can now jump through that hoop without any difficulty. But you actually want a situation where students want to develop themselves out of curiosity and start thinking critically again. In this way, AI also forces us to rethink how we work.

Curious about how you can work effectively and responsibly with AI?

Then listen to this COMPOD episode about AI and communication.
