Beyond AI
How to program with AI without any programming knowledge?

Below is the transcript of a conversation held on the Beyond AI channel.
In this episode of Beyond AI, we talk to Sylwia Strzeboński-Gancarczyk, HR Manager at CPL, about the ethical aspects of artificial intelligence in the workplace. What is the future of AI in HR, and how should companies prepare for new regulations?
Watch this material on YouTube:
A conversation with Sylwia from CPL about the ethical aspects of AI in HR
Jan Twardowski: Hi! My name is Jan. Today, my guest is Sylwia from CPL. Hi, Sylwia!
Sylwia Strzeboński-Gancarczyk: Hi!
We have talked a lot on this channel about the possibilities of artificial intelligence. Today, we want to talk for a moment about the ethical aspects, but before we get to that, Sylwia, could you say what you actually do at CPL?
At CPL, I am an HR manager, responsible for broadly defined employee care across the whole organization, but within our HR department we also handle legal matters. That is quite a large part of our work: implementing various kinds of regulations and creating the legal framework for the business. That is partly why AI ethics and legal solutions interest me.
I would like to start from the perspective of the individual employee, the "Average Joe" who encounters AI and for whom various ethical issues can arise. So first, could you say, from the perspective of your company or of other companies you know, how employees come into contact with artificial intelligence? In other words: where are companies implementing or integrating AI in such a way that employees actually interact with it?
For example, in our industry, it is interesting that LinkedIn has already introduced various AI-based solutions. As a recruiter, you automatically get suggestions on how to tailor your communication to a potential candidate. Tools of this kind are already available, recruiters can use them day to day, and I assume the ordinary user often does not realize that the message they received from a recruiter was partially generated by AI. Examples like this are certainly present in our industry too. As for our organization, we had to react quite quickly, for instance when ChatGPT became popular, because we knew our employees are eager to try business novelties and wanted to use them in their daily work. Bearing in mind that, as a recruitment agency, we process personal data, we had to make sure candidate data did not leave the company. So we quickly had to tell employees: you can use these tools, but only once our internal CPL Lab has approved a given tool for use within the organization. Otherwise, the risk grew that data might end up somewhere it should not.
Exactly, because various tools appeared, whether ChatGPT or more specialized ones. And indeed it is impossible to avoid employees starting to use them, and we don't even want to avoid it. However, using these tools can raise certain issues of an ethical or legal nature. You mentioned a general one: we don't want our data leaking out. Are there others? Do you have observations about ethical or legal problems that appear once we start using these solutions?
Quite clear instructions for employees are needed here. For example: you have to prepare a presentation. There are plenty of AI tools for that, but the information you feed into a given tool is already partly the employer's property, and possibly a trade secret. So it is important that employees are aware of what data they are entering and where, if they use such tools. If the organization certifies certain tools or considers them safe, employees should have quite clear, practical guidelines about what can be used and what can be entered.
Then there is the topic of intellectual property. Most of us sign a so-called blanket clause, transferring to the employer all rights to our inventions and to what we create in the organization. But if someone produces something using tools that somebody else invented, or where ChatGPT suggests something based on someone else's creativity, where is the boundary? Is the thing that was produced actually yours, and can the employer even claim intellectual property rights to it? So there are quite a few levels on which challenges appear.
Sure, because on the one hand, returning to your first point, we want to encourage our employees; I think no one at this point denies that these tools improve our efficiency. Thinking about the organization and its efficiency, we want people to use them. At the same time, companies introduce internal rules on how these tools should be used, for example not to put certain information into the chat. And probably not everyone even realizes that the policies on what happens to that data differ depending on which model is used, because besides ChatGPT we also have other solutions, for example Gemini from Google. Even ChatGPT itself, depending on the plan, either guarantees that the model is not trained on your data or does not. When we are processing personal data, or analyzing CVs, we cannot do with it whatever we please. So I understand that such internal tips, or going further, regulations, exist: the company says what can be done and what should not be done.
Yes. We try to give employees clear guidelines on what they can do and which tools they can use, and we also created a team that tests various tools. Before we implement them internally at a global level, that team looks at what is currently appearing on the market and which of these tools we can use safely, even at our local level.
We repeatedly receive offers, for example, for lead-generation automation. These offers differ in the security level of the data and in the way the models function. So it is important that the organization knows how to analyze these risks and how to implement such solutions.
Because I have the impression that we are often still overly enthusiastic: we test various cool new applications for ourselves, but don't quite understand how they work, what they can learn from, and how to deploy them effectively and safely.
Great. What you're describing, this department, can probably be found in many companies, and if not, it should be: a department that actually tracks what the market is offering. Because the pace of change, the speed at which new models and tools appear, is enormous and hard to keep up with. We cannot expect employees, on top of their normal work, to spend just as much time again staying up to date with what is happening on the market.
Indeed, centralizing this in one place that can do it, that can recommend what is worthwhile and what is not, and how to use it safely, is great. Of course one must be careful about blind enthusiasm, but one must also be careful not to get too scared. These are still technologies that give us enormous possibilities. Like most things, it must be done sensibly. But we should not fear and reject it, because it is something that is redefining the market.
Are employees informed at all? You mentioned the tools they can use, but is that information easily accessible? Is there, in your company or in others you observe, a culture where this is something optional, to use if you want, or is there pressure to use it because it brings great value?
At our company there isn't that pressure yet; rather, we try to help organize this world of change. How is it in other organizations? It depends on the organization. I assume there are different approaches, depending on how a company builds its market advantage. At a certain point a company may push for the use of certain tools; in other situations, not necessarily.
We start from the idea that the human is at the center, so we try to support people with tools, propose solutions, say what is possible. It is great that more and more conscious companies are appearing on the market which, for example, publish ethical codes or information on how they use AI. If you look at the websites of some global corporations, this topic appears in their codes of conduct.
It is becoming interesting to the extent that companies notice this can build a kind of market advantage, because it shows we are aware of what we are doing, and it makes it easier to build the trust of potential clients: we know what is happening on the market, we are keeping up with it. So from this perspective, implementing such rules and showing others that they exist is, I think, an interesting trend.
And it is such a fresh topic that if someone shares their approach, it is probably a great help to others, because it is not something we have been through many times and know how to handle. It is only about two years since it all exploded; it has, well, maybe not stabilized, because it is still very dynamic, but we have become somewhat accustomed to it and started treating it as something that will accompany us permanently, something that must be put into certain frameworks, with boundaries we agree on.
You mentioned property rights: the employee signs a contract transferring or selling the copyright to what they have done. But the question is whether they held copyright in the first place, because if artificial intelligence did it, opinions are divided. It is an area that is not fully explored; different entities claim different things about who the owner is. It is not an easy topic.
It's not that the model generated something entirely by itself. We also put our work into it, for example by asking it the right questions, by prompting. The question is: at which point does our contribution begin and end? What do we no longer have rights to, and can companies use it? There can be quite a few of these dilemmas, and Polish companies are not the only ones facing them.
The European Union was probably the first in the world to take up the topic. In China there are no comparable regulations, and in the States, I think the only thing so far is that the president issued certain guidelines on how to approach this safely and sustainably.
In Europe more is happening, as Europe likes to do: regulations are appearing. We are now talking about something called the AI Act, a set of regulations meant to define the boundaries within which we can move: what systems and solutions based on artificial intelligence can and cannot do. It is not yet in force; it will come into effect gradually over the coming years. But a certain outline of how it is supposed to work already exists.
The European Union distinguished four risk groups of these solutions: unacceptable risk, meaning systems that are prohibited outright, such as social scoring; high risk, meaning systems subject to strict requirements, for example in recruitment; limited risk, meaning systems with transparency obligations, such as chatbots; and minimal risk, which covers everything else and is largely unregulated.
The European Union is indeed the first to approach this topic and try to do something. Is that good?
It seems to me that it's good, because we are building frameworks within which business will be able to move. After all, as a continent we pride ourselves on promoting freedom, free will, diversity, and inclusiveness. If we compare ourselves to China, where profiling and segmentation of citizens can be used, then from the perspective of a citizen I feel safer knowing that some tools will simply be prohibited.
What seems important to me is the awareness aspect. Comparing the rollout of this regulation to GDPR, I think awareness should grow among ordinary users, the "Average Joe" generally, of how this document works and how it is linked with values.
Why we are doing it at all. Ask someone now what GDPR is, and they will tell you it is very burdensome, something that does not let you do anything. Wherever you try to arrange something, it is always: "We can't, because GDPR."
It would be good not to end up the same way with regulations on artificial intelligence.
Exactly, so that it does not become a simple box to tick, one that limits business freedom and the invention of new, innovative tools. The awareness aspect is certainly important here, so that ordinary users know what it is actually for.
The risk-analysis aspect seems sensible to me. We take into account how a given tool can affect society as a whole and how dangerous a specific product actually is. That way we do not close off the road to innovation, but try to secure some basic rights.
Exactly, I wanted to ask about that: don't you fear stifling innovation? The idea is probably noble, to steer it toward the values we want to uphold in Europe, but the world is changing, and national economies genuinely have a chance to reposition themselves. We have China, which will have no scruples, and the States, which currently don't have them either.
Don't you fear that, despite a good idea and a noble vision, it will be implemented in such a way that Europe is left behind? That these technologies will not be able to develop here, that certain foreign businesses will not be able to operate in Europe because of these regulations? Instead of adapting, they will simply withdraw, and we will become a continent where, on the one hand, little is happening, and on the other, we don't even have access to the tools the rest of the world does?
I believe we are a large enough market to remain interesting to the States or China, partly because Europe is technologically advanced, so we are attractive potential users. And in Poland we will always find a way to organize ourselves effectively around whatever regulations exist. So I am not afraid for the Polish economy, provided our lawmakers implement these regulations efficiently, without making them too formal and burdensome.
I like that expression: we will find an effective way to do something. Exactly, because it is supposed to come in over a longer period. The AI Act is still being created; even though the regulations are not yet in force and the legislative process continues, one can voluntarily start meeting these criteria already. Do you see that happening? Do you see companies starting to adapt to this idea that is taking shape?
Adapt? Not necessarily, I think; it's more that they are taking up the topic at all, getting interested. Some preliminary codes of conduct or principles, as we mentioned, are starting to appear. It is not yet formal; rather, these are attempts to test what we can allow ourselves. Where and how would we like to position ourselves as a business? What do we want to define about how we conduct our activity? So I think companies are mostly reflecting on their own values and on how those values might connect with the new regulations, but I have not yet seen anyone putting specific regulations into practice at a practical level.
So it's more that awareness is growing that an issue exists and probably needs to be addressed somehow. Until now, I think we, as humanity, or as Europe, were riding a wave of "there is something new, we must get to know it." And now we are starting to think that, as with many other things, we have to build this for ourselves in a safe way.
A moment ago we were talking about payments in Poland, and the fact that we are not even fully aware of how safely it all works here, precisely thanks to various regulations, whereas in other countries people are often afraid even to use cards, because all sorts of things happen there. Here, perhaps thanks to regulations, or perhaps thanks to something else, we have built a space in which no one is afraid to use it. It is probably one of the safer forms of payment anywhere.
Exactly. The problem is also that the legislative process has never kept up with what was happening in the world. It took a long time to define, for example, what an electric scooter is, what can be done on it, and where it may be ridden. And if we compare an electric scooter, which is nothing very new, with something we cannot fully describe, developing at a pace such that a year ago we did not imagine it would happen, then I don't rate the chances of legislation keeping up with the changing market very highly. How do you see it?
It will probably indeed be difficult. I would compare it to the development of the automotive industry: when it was starting, there was a "wild west" at the beginning, and only with time did the business work out how to build cars, how to make them safe, what the road rules should be, and so on. I think it will be the same here: we will lag a bit, but at a certain point we will reach a level where we can roughly regulate these principles. I assume specific regulations will always be a bit behind, but that is why awareness is so important. If we have conscious users and some basic values and principles to guide us, then the regulations no longer always have to be so up to date.
After all, our value as humans is that we carry some concept of ethics, some values that guide us, regardless of which culture we come from. That is why building awareness in societies is so important: it can make detailed regulations less necessary.

Well, to build that awareness you probably don't need regulations, just a certain approach and consistent action by many organizations and countries.
Okay, and are we talking about the entire European Union, or is there something you think should happen specifically here in our country, in this local market?
What I think is important in Poland is that we do not overcomplicate the implementation of the AI Act, because in Poland we do have a tendency to take EU rules and implement them even more restrictively than the original. So this is an important point that we, and our lawmakers, should keep in mind: not to overdo the level of formalization and complication of procedures, because that tendency can sometimes be seen here. From my observations, Poles and Germans love formalities, and too high a level of formality can hold us back later.
Sure, because it's not about introducing regulations for their own sake, but about achieving the goal the regulations were created for. And we are at such a moment, and we are such an economy, that this is a great chance for us: using artificial intelligence can position us significantly higher in this new order. We have a certain dynamism, we have intellectual potential in the country, there are initiatives, and advisory boards for artificial intelligence are being formed. So it is important not to spoil it through excessive compliance, through a desire to meet regulations beyond what is necessary.
And I think this is a good moment to briefly summarize. We have a certain growing, somewhat bottom-up awareness that artificial intelligence is a chance, but that it must be approached, like many other things, sensibly and prudently; boundaries have to be drawn somewhere. We have certain regulations on the horizon, coming top-down from the European Union, which again must be approached sensibly and not overdone. Still, I remain an enthusiast of this technology, and I don't want to look at it through the prism of regulations; I treat them as something that has to exist, as with every emerging technology, field, or industry. But you, too, probably see the large and growing value these solutions generate.
Yes, absolutely. Artificial intelligence is a chance for our development, and we must not be afraid of it; we simply have to harness it sensibly.
Sure, and that is, I think, a very good sentence, optimistically ending the discussion. Thank you very much for the conversation.
_____
We invite you to visit the Beyond AI channel, which is dedicated to the dynamically developing world of artificial intelligence. On the channel, you will find the latest information, analyses, and discussions on the subject of AI — your guide to the dynamic world of AI.

A review of the most important AI milestones of 2024, from the debut of Rabbit R1 to the launch of o1-preview and the AI Act. An overview of major AI trends.

Will artificial intelligence slow down in 2025? An analysis of AI development forecasts, model naming trends, and innovations such as Gencast for weather prediction.