Do we handle data responsibly? Have we considered that the algorithms used in application processes can make wrong decisions? HR managers need to keep an eye on the pitfalls of digital transformation.
Digitalisation is unstoppable in human resources, too: the use of artificial intelligence (AI), the extensive use of applicant and employee data, and the automation of processes are creating new challenges for HR departments. This will change the HR working world and its tasks. In future, for example, HR professionals will also be responsible for effective collaboration between people and machines, and will have to pay attention to value-compliant algorithms and the ethical use of data. But tech knowledge is not one of their core competencies, so they will need support – either from external specialists or from new IT colleagues.
AI is already being used in application processes today. The software not only tailors job advertisements to the desired target group and places them appropriately; it also extracts relevant information from applicants' CVs, matches it against the job advertisement and thus makes an initial preselection for recruiters.
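As a rough illustration of this preselection step, the following sketch scores the overlap between a CV and a job advertisement's requirements. The scoring logic and threshold are hypothetical simplifications, not any specific vendor's product:

```python
# Illustrative sketch of AI-assisted CV preselection (hypothetical logic):
# extract terms from a CV and score the overlap with job requirements.

def extract_terms(text):
    """Normalise free text into a set of lowercase terms."""
    return {word.strip(".,;:()").lower() for word in text.split() if len(word) > 2}

def match_score(cv_text, job_requirements):
    """Return the fraction of job requirements mentioned in the CV (0.0 to 1.0)."""
    cv_terms = extract_terms(cv_text)
    required = {r.lower() for r in job_requirements}
    return len(required & cv_terms) / len(required)

def preselect(applicants, job_requirements, threshold=0.5):
    """Keep applicants whose CV covers at least `threshold` of the requirements."""
    return [name for name, cv in applicants.items()
            if match_score(cv, job_requirements) >= threshold]

applicants = {
    "A. Meyer": "Five years of Python development, SQL databases and agile teamwork.",
    "B. Kim": "Marketing specialist with a focus on social media campaigns.",
}
print(preselect(applicants, ["python", "sql", "agile"]))  # ['A. Meyer']
```

Real systems use far more sophisticated language models, but the principle is the same: the job advertisement defines the criteria, and the software ranks or filters CVs against them.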
If information for evaluating candidates is missing, HR managers can additionally use AI-powered chatbots that ask for it specifically. The chatbot can also provide potential applicants with standard information about the company, the job advertisement and the application process, if required. The bot not only helps interested applicants; it can also be used for internal communication. In this case, it answers employees' routine questions on personnel topics such as benefits, holiday or sick days.
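The routine-question use case can be pictured as a lookup from keywords to answers. The sketch below is a deliberately naive toy (the keywords and canned answers are made up); real AI chatbots use language models rather than keyword matching, but the escalation pattern is the same:

```python
# Toy FAQ routing sketch (hypothetical keywords and answers): routine HR
# questions get canned replies; anything unmatched goes to a human.

FAQ = {
    "holiday": "Full-time employees have 30 days of annual leave.",
    "sick": "Please report sick days to your manager before 9 a.m.",
    "benefits": "An overview of benefits is available on the intranet.",
}

def answer(question):
    """Return a canned answer if a keyword matches, else hand over to HR."""
    q = question.lower()
    for keyword, reply in FAQ.items():
        if keyword in q:
            return reply
    return "I will forward your question to the HR team."

print(answer("How many holiday days do I have?"))
print(answer("What is the parental leave policy?"))  # no match: escalated
```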
This saves the HR team a lot of time, because the AI takes over automated processes and administrative tasks, allowing HR staff to concentrate on other important work. But the use of AI also involves dangers: the technology can make wrong decisions, sort out suitable talent in the recruiting process, or communicate in its chatbot function in a way that does not reflect the company's values and brand.
An AI is only as good as the people who program and train it. The AI's learning process is based solely on the data provided by the software developers. If that data illuminates only one aspect of a task, the system learns the same biased thinking as humans. For example, if cultural and linguistic differences are not taken into account, an AI chatbot could communicate with applicants and employees in a discriminatory way. Furthermore, AI training data always refers to the past: if certain positions have never been filled by women or people with disabilities, the AI may well inhibit diversity in the company. And if a company feeds the AI with utopian requirements, it is no wonder that many candidates are automatically sorted out.
Algorithm Bias Auditors have an overview of all AI technology used in HR. They methodically examine all algorithms and monitor every element of the AI to ensure that it represents the company's employer-branding values and assists ethically at every touchpoint with applicants and employees. They are there to prevent errors of judgment and to uncover biases in communication and decision-making. AI auditors also make sure that large and diverse data sets are used when training the technology, and they review these data sets constantly.
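One concrete check such an auditor might run is the "four-fifths rule" known from US employment-selection guidelines: if one group's selection rate falls below 80% of another's, the system is flagged for review. The following is a minimal sketch with made-up example data, not a complete audit procedure:

```python
# Illustrative adverse-impact check (a sketch with made-up data, not a full
# audit): compare selection rates between groups; a ratio below 0.8 is a
# common red flag under the "four-fifths rule".

def selection_rate(decisions):
    """Fraction of applicants in a group who were selected (1) vs. rejected (0)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# 1 = selected by the AI preselection, 0 = rejected (fabricated example data)
men   = [1, 1, 1, 0, 1, 0, 1, 1]   # selection rate 0.75
women = [1, 0, 0, 1, 0, 0, 0, 1]   # selection rate 0.375

ratio = adverse_impact_ratio(men, women)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:
    print("Warning: possible adverse impact - review the model and its training data.")
```

A real audit would look at far more than one ratio, but even this simple metric shows how a skewed historical data set becomes a measurable, and therefore correctable, problem.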
In human resource management, more and more applicant and employee data is being stored and used. Digital recruiting and the digital personnel file are already a reality today. Cyber-bullying and data leaks can cause a great deal of damage – to the company as well as to applicants and employees. Moreover, decisions about candidates and talents – their recruitment, further training and promotion – increasingly depend on data. That is why HR professionals must not lose sight of data security and ethical responsibility.
After all, applicants and employees trust that their personal information will be handled fairly and in compliance with data protection laws. To ensure that this trust is not disappointed, companies need the role of an Ethical Data Manager. In addition to protection and security, they ensure above all that data is used anonymously, honestly, transparently and fairly.
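One building block of such data handling is pseudonymisation: replacing direct identifiers with stable pseudonyms before data is analysed. The sketch below illustrates the idea with a salted hash; the function names and salt are illustrative, and this alone is not a compliance recipe:

```python
# Minimal pseudonymisation sketch (illustrative, not a compliance recipe):
# direct identifiers are replaced by salted hashes before analysis, so HR
# analytics can run without exposing who is who.

import hashlib

SALT = b"example-secret-salt"  # in practice: a securely stored secret

def pseudonymise(identifier):
    """Replace a direct identifier (name, e-mail) with a stable pseudonym."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()[:12]

record = {"name": "A. Meyer", "department": "Sales", "sick_days": 4}
safe_record = {**record, "name": pseudonymise(record["name"])}
print(safe_record)  # same structure, but the name no longer identifies the person
```

Because the same input always yields the same pseudonym, aggregate statistics still work, while re-identification requires access to the secret salt and the original data.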
Collaboration between humans and machines is also moving into the focus of HR managers, as almost all areas of work are being digitised and automated. Teams of humans and machines will become the norm in companies. For this collaboration to prove effective, it must be consciously managed, bringing the key skills of humans and machines together in a goal-oriented way. To do this, employees should first overcome their prejudices about, and fears of, robots and AI software. Then it is a matter of getting to know and assessing the strengths of the technology, such as accuracy, endurance, calculation and speed. Finally, these should be combined in a promising way with the human abilities of creativity, differentiated perception, judgment, empathy and versatility.
Designing this cooperation optimally requires a completely new way of working and thinking. The central task of Human-Machine Teaming Managers is therefore the development of an interaction system through which humans and machines communicate their abilities, goals and intentions to each other. They also help to design efficient task planning for each affected work process and to successfully form hybrid teams. In the process, they identify processes that can be improved by newly available technologies.
Digitalisation makes its demands: HR departments should already be building up IT expertise around automation processes, AI deployment and sensitive data management – whether through further training of their own employees or through external consultants. In the medium or long term, dedicated part-time or full-time positions can be created for this purpose, because the functions of Algorithm Bias Auditor, Ethical Data Manager and Human-Machine Teaming Manager will be needed in the future.