

Hassan Benouachane

ARTIFICIAL INTELLIGENCE IN SOCIAL SECURITY: OPPORTUNITIES AND CHALLENGES

Artificial intelligence (AI) is shaping up to be the transformative technology of our time and has become a powerful driver of social change. Social security institutions are progressively applying emerging technologies, including big data analysis, artificial intelligence, blockchain, and biometrics. The increasing use of AI by social security institutions is enabling more proactive and automated delivery of social services. Although the potential of these technologies has not yet been fully tested or explored, they are already providing relevant outcomes in key social security areas such as addressing error, evasion, and fraud, as well as developing effective approaches and automated solutions to customers' concerns aimed at improving social services. The fields of application of these technologies include medical care, adaptive systems in robots carrying out dangerous activities at work, communication with insured people, and management of welfare benefits. However, the application of AI in the social sector also poses important challenges, prompting state institutions to consider how best to take full advantage of this new technology. The rapid introduction of automated technological solutions poses potential risks as well. This paper explores the various types of AI application and the current and future uses of AI in the field of social security, with a particular focus on strategies for governments as they consider implementing AI. It concludes that the use of AI in social security is both inevitable and potentially beneficial for all parties involved. It is not necessarily either an unadulterated boon or a bane, but calls for careful planning and a comparative assessment of the benefits and challenges of AI versus human labour.

Keywords: artificial intelligence, social security, automation, opportunities, challenges

DOI: 10.17323/727-0634-2022-20-3-407-418

Hassan Benouachane - PhD in Public Law and Political Science, Faculty of Law, Economics and Social Sciences Agdal, Mohamed V University, Rabat, Morocco; Member and researcher of the International Institute of Scientific Research, Marrakech, Morocco. Email: h.benouachane@gmail.com

© The Journal of Social Policy Studies. Volume 20. No. 3

Fuelled by the power of data, the diverse set of artificial intelligence (AI) technologies is expected to play a positive role in making organizations perform more effectively and efficiently, enabling them to create powerful algorithms that act autonomously on behalf of humans and make decisions based on the data already collected. An ever-increasing number of social security agencies and organizations around the world have been quick to set up AI systems to harness the large volumes of data they manage to streamline processes, deliver customized services, support service users, handle various applications for social assistance, and formulate evidence-based decisions. Yet, implementing AI programmes successfully comes with many complex challenges that could undermine the potential for positive change. If not designed, monitored, and refined following the basic principles underpinning social policies, such as equal rights and social justice, AI can generate potentially significant consequences that are undesirable for individuals, organizations, and societies. Such consequences have the potential to aggravate existing social issues by promoting inequality and discrimination and call into question a government's ability to protect and serve its citizens (Ananny 2016). Many governments, including those of the United Kingdom, Australia, and Norway, are pursuing opportunities to utilize AI without a clear understanding of its costs, benefits, and risks for users.

The starting point of this study is the increasing relevance of AI as well as its ground-breaking potential for the social sector on a global level, in both positive and negative terms. Several countries, in particular the Nordic countries, have recognized the great value of AI for social protection and have launched various cost-intensive AI initiatives, revealing distinct potential areas of application for this technology. However, no government has so far comprehensively addressed the whole spectrum of AI applications (Wirtz et al. 2018). At the same time, its use introduces new risks and ethical challenges, such as biased data, fairness, and transparency. These concerns require social security organizations to anticipate potential unintended effects and put various safety measures in place to prevent them.

Since the deployment of AI is still in its infancy in the social security sector and many applications have been introduced as innovative pilot projects, public authorities and administrators of social security organizations may not be aware of the full range of AI application opportunities and related challenges. Furthermore, specific research on AI is still scarce and fails to provide an integrated view of AI applications and challenges for the social security sector. Compared to the expanding debate on the potential challenges of AI adoption in the social security sector, there is little to no empirical research providing evidence-based guidelines for its governance. To fill this gap, this study seeks to develop a comprehensive understanding of AI, examining its applications and impact in the social security sector by addressing the following research questions:

• How is AI used in social security?
• What are the opportunities, challenges, and consequences of using it?
• What can social service organizations do to support ethical, accountable, and inclusive automation in social security services?

The following section describes the methods used to collect and analyse the data for the study. Next, the article briefly describes AI and discusses its application in social security. Then, it analyses the most important opportunities of AI in social security and highlights the key challenges that social services face when routinizing such technologies. The last section reports the findings of this study and concludes with a discussion of the theoretical and practical implications of this work as well as some suggestions for future research.

Research methodology

We adopted a systematic review of the literature for this article. A systematic literature review is a process that provides a collection of relevant evidence on a given topic that fits pre-specified eligibility criteria and answers the formulated research questions (Mengist et al. 2020). The steps in conducting a systematic review are defining the research questions, conducting a literature search, identifying relevant work, assessing the quality of studies, summarizing the evidence, and interpreting the findings. The aim of the present paper is to identify opportunities for the use of AI in the social security field, taking into consideration the challenges it poses, and to suggest actions for social services stakeholders in applying AI. To carry out a comprehensive literature search, we utilized three databases, the Web of Science (WoS), Google Scholar, and EBSCOhost, to identify documents that covered relevant topics (operationalised through keywords). To narrow down the search, we selected countries such as Finland, the United Kingdom, and Switzerland, which are well-suited cases because they enjoy among the highest degrees of digitization of public bodies in the world. These illustrative case studies are therefore a good indicator of the opportunities and challenges that other social security organizations may face around the world.

Most of the papers retrieved were published between 2017 and 2022 and written in English. Only original articles were selected for analysis. The search identified 86 publications. Among the 86 studies retrieved, we selected the most relevant using the following procedure. The selection was based on content-related inclusion/exclusion criteria: only publications whose content directly answered the research questions of the present paper were included, while publications related only by subject were excluded. Moreover, to ensure the high quality of the sample, only publications from peer-reviewed journals and substantial reports were considered. The inclusion/exclusion criteria were first applied to the titles, keywords, and abstracts of publications, and then to the full texts. Articles that did not meet the inclusion criteria were excluded. Finally, a total of 35 studies were deemed eligible for inclusion. Data extraction then involved collecting and coding information for each of the 35 studies.

Applying AI in social security

Social security institutions are progressively developing and implementing AI technology worldwide. This is, for instance, the case in Sweden. Local authorities govern Swedish social services; 90 % of its regions use AI in daily life and view its usage positively (Flanders Investment and Trade 2020). In another study, 78 % of municipalities were reported to use AI and perceive it as beneficial (Vinnova 2018). The Australian case is another illustrative example of the introduction of artificial intelligence into social security. In July 2016, Centrelink, the Australian Government's master programme that distributes social security payments to citizens, implemented the Online Compliance Intervention (OCI) programme, an automated debt-calculation and debt-collection scheme (Rinta-Kahila et al. 2022).

One real-world example of AI-based technology applied in social security systems is the intelligent conversational assistant. In the case of Norwegian municipalities, the results of one study show that the most popular applications of AI for municipalities include intelligent interaction agents with citizens (28.9 %), real-time translation for meetings, including speech-to-speech and speech-to-text (21.1 %), and request processing, application handling, and automated data entry (15.7 % each) (Mikalef et al. 2019). It is evident that some areas of potential AI application are of increased interest to municipalities. First and foremost, intelligent interaction with citizens in the form of conversational agents is regarded by respondents as a top investment priority for the near future.

Social security institutions are making increasing use of AI-based software to improve the quality and availability of online customer services across different branches and types of services. Many different types of conversational agents, including chatbots, have been developed, added to websites, service apps, social media, and instant messaging services, and are accessible by telephone, mobile phone, computer, and many other digital platforms. Intelligent chatbots using AI are a specific type of virtual assistant that can increasingly engage in natural conversations and build relationships with users. They can simulate human behaviour and are able to respond autonomously to users' inquiries (ISSA 2020). This software provides clients with automated and personalized services, not only by answering the most frequently asked questions but also by requesting information on the steps taken by customers, such as registering and applying for benefits.

At present, there is a great deal of interest in this type of technology, which is increasingly being adopted by social security institutions since it can be set up within a few months and performs at a reasonable cost/benefit ratio, enabling them to handle open-ended inquiries (ISSA 2021). This trend is apparent from the good practices and experiences reported by many social security organizations across the world. For instance, certain Swiss cantons use chatbots to simplify and support administrative communication. This is the case of the Social Insurance Institution of the Canton of St. Gallen, which uses this software to reduce the workload associated with requests for premium reductions. It is highly likely that chatbots will also be used for services related to contributions to Old-Age and Survivors Insurance, Disability Insurance, and Income Compensation Insurance (Binder, Egli 2020).
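To make the idea of such conversational agents concrete, the sketch below shows a minimal, rule-based intent matcher of the kind that could sit behind a benefits chatbot. It is purely illustrative: the intents, keywords, and canned replies are hypothetical and do not reproduce any institution's actual system, which would typically rely on far richer natural-language understanding.

```python
# Minimal, illustrative intent matcher for a benefits chatbot.
# The intents, keywords, and replies are hypothetical examples only.
INTENTS = {
    "register": (
        ["register", "sign up", "create an account"],
        "To register, please have your national ID ready and visit the e-service portal.",
    ),
    "apply_benefit": (
        ["apply", "claim", "application"],
        "You can apply for benefits online; most decisions are issued within a few weeks.",
    ),
    "payment_status": (
        ["payment", "paid", "status"],
        "Payments are normally issued within five working days of a decision.",
    ),
}

def answer(user_message: str) -> str:
    """Return the reply of the best-matching intent, or escalate to a human agent."""
    text = user_message.lower()
    best_reply, best_hits = None, 0
    for keywords, reply in INTENTS.values():
        hits = sum(1 for kw in keywords if kw in text)
        if hits > best_hits:
            best_reply, best_hits = reply, hits
    if best_reply is None:
        return "I could not find an answer; transferring you to a service agent."
    return best_reply

if __name__ == "__main__":
    print(answer("How do I apply for a housing benefit?"))
    print(answer("Where is my money?"))  # no match, escalated to a human agent
```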

Another way to use AI is by deploying algorithms that can support decision-making. Many social security institutions around the world are actively working and experimenting with automated decision-making (ADM) and machine learning. Algorithmically driven ADM systems are already in use across the EU. For example, an ADM system called Systeem Risico Indicatie (SyRI) was used in the Netherlands to detect welfare fraud. Likewise, in 2010, the Slovenian government introduced the e-Sociala (e-social services) programme to optimize social transfers, such as social and unemployment benefits, child benefits, and subsidies that make up the welfare system, which is now controlled by AI, ADM, and machine learning capabilities (Kucic 2020). Similarly, Finland's social insurance institution, known as Kela, which is responsible for settling some 15.5 billion euros of benefits annually under national social security programmes, has implemented ADM in the form of Robotic Process Automation (RPA) to process benefit claims. It is now possible to apply for benefits online, and 73.5 % of applications to Kela were filed online in 2020, up from 64 % in 2016 (Kela 2021). In another case, a recent report from the Swedish municipality of Trelleborg states that 85 % of digital applications for social assistance are handled at least partly by the RPA (information and calculation) and 30 % are handled entirely by the RPA (Ranerup, Henriksen 2020). The Finnish Centre for Pensions also tested a machine learning algorithm on the centre's anonymous register data of 500,000 people, correctly predicting 78 % of the people who would retire on a disability pension within two years (Theo 2018).
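As an illustration of the kind of machine learning experiment described above, the following sketch trains a simple binary classifier to flag people likely to retire on a disability pension within two years. It is not the Finnish Centre for Pensions' actual model: the features, the synthetic data, and the choice of logistic regression are assumptions made for demonstration only.

```python
# Illustrative sketch only, not the Finnish Centre for Pensions' actual model:
# a logistic-regression classifier trained on synthetic "register-like" data
# to flag people likely to retire on a disability pension within two years.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features: age, sickness-allowance days last year, rehabilitation episodes.
X = np.column_stack([
    rng.integers(25, 64, n),
    rng.poisson(20, n),
    rng.poisson(0.3, n),
])

# Synthetic label built from a toy risk rule (for demonstration only).
risk = 0.03 * (X[:, 0] - 45) + 0.05 * X[:, 1] + 0.8 * X[:, 2]
y = (risk + rng.normal(0.0, 1.0, n) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("share of correct predictions:", round(accuracy_score(y_test, model.predict(X_test)), 3))
```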

The potential and opportunities of AI in social security

While various deployments of new computational tools will help manage data better, institutions of all sizes are using automation to provide better services worldwide. Social security institutions are increasingly exploring new ways to harness the large volumes of data they manage to streamline processes, deliver customized services, reduce fraud and error, and formulate evidence-based policy decisions (ISSA 2022). Automation and AI technologies offer enormous benefits. Swedish municipalities, for example, are testing AI to improve efficiency and generate savings across a wide range of administrative tasks (Andreasson, Stende 2019). In this sense, process automation is typically used to simplify data processing, centralize information, and reduce the need for human interaction. For instance, the Fiji National Provident Fund (FNPF) saved 1.8 million Fiji dollars (FJD) through automation and processed and paid 80 % of applications within the committed turnaround time of five working days (ISSA 2022). AI also increases employee satisfaction, engagement, and productivity while reducing manual labour and replacing repetitive tasks. According to a survey conducted among employees on AI, 79 % of respondents believe that AI will make their jobs more productive and enable them to work on simpler tasks or at a higher level (Omatu 2013).

The use of automation and similar technologies is rapidly increasing in the public sector to deliver social services and support administrative decision-making. In this sense, so-called data-driven innovation (DDI) is enabling social security institutions to improve products, processes, and organizational methods (ISSA 2019). For example, Services Australia, the agency responsible for delivering social services and social security payments in Australia, has utilised data analytics to reliably assess claims via data-driven automation. The service has successfully automated over 31,000 claims for social security benefits in real time, without any staff intervention, saving time that was redirected to supporting vulnerable customers and more complex cases (ISSA 2022).

The prevailing argument for applying AI technologies in such organizational settings is that they can enhance human decision-making and action (Davenport, Ronanki 2018). Swedish municipalities, for instance, are using RPA to help social workers make decisions on benefits for claimants. The software currently handles around one in three reapplications (Lind, Wallentin 2020). With RPA speeding up processing times and reducing costly errors, processing costs decline and per-employee output increases. For the Social Insurance Institution of Finland (Kela), the main reason for deploying RPA was to take over routine tasks and thereby give personnel more time to concentrate on complicated cases. Each year, Kela makes some 19 million decisions, of which around half a million are generated automatically, without any human involvement, through the use of IT (Vaananen 2021).

Numerous social security institutions across the globe are investing heavily in AI applications to optimize daily routines and improve online customer services in different branches and types of benefits. The opportunities of applying AI technologies include using chatbots to interact with citizens about procedures and other types of queries (Park 2017). This software can simulate human behaviour and is able to respond autonomously to users' inquiries, effectively reducing service costs while handling many customers simultaneously (Adamopoulou, Moussiades 2020). For instance, Argentina's Superintendency of Occupational Risks launched a chatbot to provide a more rapid response to user requests, reduce the strain on its customer-service phone lines, and respond to questions about work injury benefits. The chatbot also offers information on how to sign up with an occupational risk insurance company and is capable of providing information on personal data (ISSA 2021).

A few statistics illustrate the potential of this AI-based software. In Sweden, 95 % of local authorities are currently running cost-reduction programmes that inevitably lead to the phased replacement of staff functions by AI and digital co-workers (O'Dwyer 2020). In Norway, the virtual agent Frida of the Labour and Welfare Administration (NAV) helped Norwegians access key social benefits during the outbreak of the coronavirus pandemic in 2020. Frida responded to more than 270,000 inquiries from concerned citizens, which corresponded to the capacity of 220 service agents (Ringes 2020). Currently, most inquiries are handled completely by the chatbot, with only one out of five being transferred to a live chat with a service agent (Vassilakopoulou et al. 2022). In 2020, Kela completed about 3 million office and call centre-based customer service interactions. Additionally, there were 64.4 million logins to Kela's e-services, and 72 % of all benefit applications reviewed by Kela were filed online (Kela 2021).

Fraudulent claims cost governments billions. For instance, Universal Credit fraud in the UK reached a record high of 13 % of all spending on the benefit, costing the taxpayer £5.6 billion (Buchanan 2022). In response, social security institutions apply discovery and profiling techniques to detect evasion and complex fraud operations (ISSA 2019). For example, in Portugal, a centralized automated system deployed to deter fraud associated with medical prescriptions has reportedly reduced fraud by 80 % in a single year (Chiusi 2020). In Australia, Centrelink deployed the automated Online Compliance Intervention (OCI) programme to detect and recover fraudulent benefits. The automated system has saved almost a billion Australian dollars to date (UNESCAP 2019).
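The discovery and profiling techniques mentioned above can be illustrated, in very simplified form, by an unsupervised anomaly-detection sketch that flags unusual benefit claims for manual review. The claim features, the contamination rate, and the use of an isolation forest are hypothetical assumptions for demonstration, not a description of any institution's actual fraud-detection system, which would also be subject to the legal and ethical safeguards discussed in the next section.

```python
# Illustrative sketch only: unsupervised anomaly detection over synthetic claim
# data. The features, contamination rate, and model choice are hypothetical
# assumptions, not any institution's actual fraud-detection pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
n_claims = 1_000

# Hypothetical claim features: claimed amount, claims filed in the last year,
# and days since the claimant last changed their bank account.
claims = np.column_stack([
    rng.normal(500.0, 100.0, n_claims),
    rng.poisson(2, n_claims),
    rng.integers(0, 365, n_claims),
])

model = IsolationForest(contamination=0.02, random_state=1).fit(claims)
labels = model.predict(claims)  # -1 marks claims routed to manual review
print("claims flagged for manual review:", int((labels == -1).sum()))
```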

Challenges and risks of AI for social security

The scope of social security activities is to a large extent similar worldwide, as are the challenges that these systems face. Barriers to AI adoption are therefore also likely to be similar in many countries and across multiple industries and cases. According to one study, 51 % of business executives believe that AI transparency and ethics are important for their business. Moreover, 41 % of senior executives state that they have suspended the deployment of an AI tool because of a potential ethical issue (Capgemini Research Institute 2019). These ethical issues include interactions that result in outcomes that are unexplainable, unfair, opaque, or biased against a certain group of users.

In fact, the transparency and explicability of AI applications constitute an important issue, especially regarding decisions which impact people and/or involve risks (ISSA 2019). Centrelink's Online Compliance Intervention (OCI) is a good example of this issue. Centrelink's debt programme raises many of the concerns that arise in respect of ADM systems that use AI. The Commonwealth Ombudsman published an investigation report (2017) which found issues with the transparency, usability, and fairness of the OCI system. Between November 2016 and January 2017, the ombudsman's office received 241 complaints about OCI debts. The office received 1,563 'approaches' about Centrelink matters, compared to 835 in the month before the system was implemented, which constituted an 87 % increase in complaints (Nott 2017).

Social security institutions, among others, are struggling to resolve the challenges of ensuring compatibility with existing laws and of justifying the logic of ADM (Ruckenstein, Velkova 2019). For example, the legislative compliance of automated procedures is a major concern for Kela. A particular issue identified in the 2019 Automating Society report is the manner in which Kela communicates the results and the reasoning behind the decision-making process to citizens (AlgorithmWatch 2019). If it is difficult to explain how a simple algorithm works, how can we explain complex AI systems like machine learning, and their automated decisions, in a way everyone understands? Although an AI diagnosis may be more accurate, a lack of explicability may lead to a lack of trust between the authorities and the populace. AI will only be successful if it is based on trust between the authorities and the citizens they serve.

As mentioned before, Trelleborg is Sweden's front-runner in automating welfare distribution. An analysis of the system's source code brought little transparency, but it did reveal that the personal data of hundreds of applicants had accidentally been made public (Lind, Wallentin 2020). The analysis showed that the published records contained personal data of citizens who previously had welfare-related contacts with the municipality. The names and social security numbers of approximately 250 people were visible to anyone who filed a Freedom of Information (FOI) request to see the system's code, as well as to the subcontractors working on it. This case raises questions about privacy, data protection, and discrimination. AI applications based on machine learning need access to large amounts of data, but data subjects have limited rights over how their data are used (Veale et al. 2018).

In terms of the challenges that ADM systems present, the case of the Netherlands highlights major issues concerning transparency and privacy. In the so-called Systeem Risico Indicatie (SyRI) judgement, the District Court of The Hague found that this automated system for detecting welfare fraud was insufficiently transparent and contained insufficient safeguards to effectively protect the right to privacy (Appelman et al. 2021). As AI systems become ubiquitous, regulators need to think about developing rules to manage the security and privacy concerns associated with the use of these new tools. While the new algorithmic tools promise more accurate and consistent decisions, their opacity creates deep accountability challenges. A crucial question will be how to subject such tools to meaningful accountability and ensure their adherence to legal norms of transparency, evidence-based decisions, and non-discrimination (Castelluccio 2020).

AI systems can interpret massive amounts of data from various sources to carry out a wide range of tasks. When the datasets and algorithms that AI relies on are incomplete or biased, they can lead to biased AI conclusions and reinforce gender, racial, or ideological biases (Gopani 2022). AI can also deepen inequalities by automating routine tasks. SyRI provides another example: it used personal data to calculate the likelihood of someone committing benefits fraud. Before being discontinued, it was heavily criticised for proactively targeting vulnerable populations, leading to discrimination and stigmatization of persons with low incomes. In this sense, the District Court of The Hague explicitly recognised the 'risk that SyRI inadvertently creates links based on bias, such as a lower socio-economic status or an immigration background' (Rb Den Haag 2020).

Furthermore, the use of AI risks stigmatization, reinforcing existing stereotypes, social and cultural segregation and exclusion, and subverting individual choice and equal opportunities (Kritikos 2019). Access to digital services tends to rise with income, so the poorest are most likely to be data-poor as well. AI solutions can also unintentionally harm the very people they are supposed to help. For example, they can discriminate against individuals who have no access to the data-generating technology that the AI system relies on, such as a mobile phone, or who are excluded by the language of the software. Since the chatbot of the Norwegian service NAV does not support English, for instance, language is a significant obstacle to using that channel (Jakovic, Chandrasegaram 2021).

There is still little research data on how AI is affecting the social security field, so it is hard to assess its actual impact. The development and use of AI systems should be guided by ethical principles that promote well-being and prosperity while protecting people's private data and ensuring the fair treatment of individuals, communities, and groups. Among other things, social security institutions must obtain the personal data of individuals but cannot use it without their consent.

Conclusion

The study findings offer several practical implications and carry valuable advice for the social security field. The reviewing exercise resulted in a first inventory of illustrative AI use cases, highlighting the variety of interest expressed by social security organizations in experimenting with AI. Common AI typologies found in the current inventory include the use of intelligent conversational assistants, machine learning methods, and automated decision-making systems. These examples clearly show that AI can help enhance human tasks, automate many activities, serve as a decision-making aid, and detect fraud.

Against this background, the study provides an understanding of AI and a list of AI applications for managers in the social sector, including their opportunities, which can serve as a reference to assist with implementing AI initiatives and potential AI projects in the field of social security. This field can greatly benefit from AI methods and tools. There is evidence that AI applications save processing time, reduce data error rates, work better at lower cost, and conduct a variety of data analyses. However, the application of AI and existing algorithms is not without challenges, as it comes with a range of risks and ethical issues. The AI challenges mentioned in this study can serve as a reference point to avoid potential pitfalls when introducing its applications in the social care sector.

This paper reflects insights gathered from a diverse set of research suggesting that AI's future rests on the ability to establish a balance between process automation and human control and to ensure that risks and benefits are distributed fairly, particularly in the area of social security. It must be noted that rapid technological advancements and the new domains of application of AI have introduced new challenges and opportunities. The study's findings offer several practical implications and valuable advice for the field of social security. When this technology is used appropriately, with proper care and consideration of its impacts on people's lives, AI systems have great potential to improve the quality and efficiency of products and services. To achieve this, public managers must establish clear governance frameworks for transparency and accountability to promote fair algorithmic decisions by providing the foundation for obtaining recourse to meaningful explanations. Furthermore, sensitive aspects that raise concerns among citizens, such as AI safety, privacy, and trust, could generally be addressed by measures that foster transparency.

In contrast, ethical challenges may be more difficult to address. They represent a long-term issue, which requires policy initiatives as well as the establishment of a clear set of rules to control and govern AI applications, aimed at ensuring responsible management of these critical areas. Monitoring adherence to these ethical guidelines will maximize the potential of AI while protecting stakeholders and users from the inherent risks associated with this technology. For example, the protection of data, personal information, and human values is among the most crucial factors for a human-centred approach. Overall, the successful future of AI requires social security organizations to rethink their current strategies and structures and adapt them in accordance with the prevailing challenges.

This study has several limitations that suggest future research opportunities. Methodologically, the systematic literature review had specific shortcomings. First, the review focuses on research within a specific period, excluding any research conducted outside it. Additionally, it excluded documents written in other languages, including the Nordic languages. As such, some critical nuances were probably lost during the analysis. Lastly, this systematic review has focused exclusively on ethical issues and social dimensions, excluding technical, organizational, and legal concerns; future research should explore these concerns in detail. Investigating AI opportunities and challenges in greater depth would have gone beyond the scope of this study, but may be of interest to improve our understanding of the dynamics of these opportunities and challenges. Furthermore, as all challenges are closely connected to benefits, future research could broaden the study's scope, examining and comparing all aspects in a common framework to better understand their relationships.

References

Adamopoulou E., Moussiades L. (2020) Chatbots: History, Technology, and Applications. Machine Learning with Applications, 2. Available at: https://www.sciencedirect.com/science/article/pii/S2666827020300062 (accessed 27 May 2022).

Ananny M. (2016) Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness. Science, Technology, & Human Values, 41 (1): 93-117.

Andreasson U., Stende T. (2019) Nordic Municipalities' Work with Artificial Intelligence. Available at: https://norden.diva-portal.org/smash/get/diva2:1375500/FULLTEXT01.pdf (accessed 27 May 2022).

AlgorithmWatch (2019) Automating Society: Taking Stock of Automated Decision-Making in the EU. Available at: https://algorithmwatch.org/en/wp-content/uploads/2019/02/Automating_Society_Report_2019.pdf (accessed 25 May 2022).

Appelman N., Ó Fathaigh R., van Hoboken J. V. J. (2021) Social Welfare, Risk Profiling and Fundamental Rights: The Case of SyRI in the Netherlands. Journal of Intellectual Property, Information Technology and E-Commerce Law, 12 (4): 257-271.

Wirtz B. W., Weyerer J. C., Geyer C. (2018) Artificial Intelligence and the Public Sector - Applications and Challenges. International Journal of Public Administration, 42 (7): 596-615.

Binder N. B., Egli C. (2020) Research Chapter on Switzerland in Report Automating Society 2020. Available at: https://automatingsociety.algorithmwatch.org/wp-content/uploads/2020/12/Automating-Society-Report-2020.pdf (accessed 05 May 2022).

Buchanan M. (2022) Universal Credit Fraud Costs Taxpayers More than £5 bn. Available at: https://www.bbc.com/news/uk-61591517 (accessed 02 May 2022).

Capgemini Research Institute (2019) Why Addressing Ethical Questions in AI Will Benefit Organizations. Available at: https://www.capgemini.com/gb-en/wp-content/uploads/sites/5/2022/05/Ethics-in-AI-Infographic_Web.pdf (accessed 11 May 2022).

Castelluccio M. (2020) Opening a Window on Government AI, Strategic Finance. Available at: https://sfmagazine.com/technotes/october-2020-opening-a-window-on-government-ai/ (accessed 17 May 2022).

Chiusi F. (2020) Automating Society Report 2020. Available at: https://automatingsociety.algorithmwatch.org/ (accessed 07 May 2022).

Commonwealth Ombudsman (2017) Centrelink's Automated Debt Raising and Recovery System, Report No. 02/2017. Available at: http://www.ombudsman.gov.au/__data/assets/pdf_file/0022/43528/Report-Centrelinks-automated-debt-raising-and-recovery-system-April-2017.pdf (accessed 24 May 2022).

Davenport T. H., Ronanki R. (2018) Artificial Intelligence for the Real World. Harvard Business Review, (96): 108-116.

Flanders Investment and Trade (2020) Artificial Intelligence in Sweden. Available at: https://www.flandersinvestmentandtrade.com/export/sites/trade/files/market_studies/2020-AI%20market%20study-SE.pdf (accessed 20 March 2022).

Gopani A. (2022) Rakuten's Lata Iyer on AI for Human Empowerment. Available at: https://analyticsindiamag.com/rakutens-lata-iyer-on-ai-for-human-empowerment/ (accessed 11 March 2022).

ISSA (2019) Applying Emerging Technologies in Social Security: Summary Report 2017-2019. Available at: https://assets.cdn.sap.com/sapcom/docs/2020/06/c87c28a2-9a7d-0010-87a3-c30de2ffd8ff.pdf (accessed 16 March 2022).

ISSA (2020) Artificial Intelligence in Social Security: Background and Experiences. Available at: https://ww1.issa.int/analysis/artificial-intelligence-social-security-background-and-experiences (accessed 18 March 2022).

ISSA (2021) The Application of Chatbots in Social Security: Experiences from Latin America. Available at: https://ww1.issa.int/analysis/application-chatbots-social-security-experiences-latin-america (accessed 10 April 2022).

ISSA (2022) Data-Driven Innovation in Social Security: Good Practices from Asia and the Pacific. Available at: https://ww1.issa.int/analysis/data-driven-innovation-social-security-good-practices-asia-and-pacific (accessed 18 April 2022).

Jakovic D., Chandrasegaram G. (2021) Chatbot as a Channel in Government Service Delivery. Oslo: Norwegian University of Science and Technology.

Kela (2021) Kelan vuosi 2020 [Kela's Annual Report 2020]. Available at: https://www.kela.fi/ documents/10180/17802081/Kelan+vuosi+2020.pdf/0e40794f-3a1c-4d13-9d40-a8661c434f00 (accessed 01 June 2022).

Kritikos M. (2019) Artificial Intelligence ante Portas: Legal & Ethical Reflections, European Parliamentary Research Service, European Union. Available at: https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/634427/EPRS_BRI(2019)634427_EN.pdf (accessed 24 April 2022).

Kucic L. J. (2020) Journalistic Story and the Research Chapter on Slovenia in Automating Society Report 2020. Available at: https://automatingsociety.algorithmwatch.org/wp-content/uploads/2020/12/Automating-Society-Report-2020.pdf (accessed 27 May 2022).

Lind K., Wallentin L. (2020) Central Authorities Slow to React as Sweden's Cities Embrace Automation of Welfare Management. Available at: https://algorithmwatch.org/en/trelleborg-sweden-algorithm/ (accessed 02 June 2022).

Mengist W., Soromessa T., Legese G. (2020) Method for Conducting Systematic Literature Review and Meta-Analysis for Environmental Science Research. MethodsX, (7): 100777.

Mikalef P., Fjørtoft S. O., Torvatn H. Y. (2019) Artificial Intelligence in the Public Sector: A Study of Challenges and Opportunities for Norwegian Municipalities. In: I. O. Pappas, P. Mikalef, Y. K. Dwivedi, L. Jaccheri, J. Krogstie, M. Mäntymäki (eds.) Digital Transformation for a Sustainable Society in the 21st Century. Cham: Springer: 267-277.

Nott G. (2017) Ombudsman: Centrelink OCI Lacking Usability and Transparency. Computerworld. Available at: https://www.computerworld.com/article/3476375/ombudsman-centrelink-oci-lacking-usability-and-transparency.html (accessed 10 June 2022).

O'Dwyer G. (2020) Swedish Municipalities Test AI to Drive Efficiencies and Cost Savings. Available at: https://www.computerweekly.com/news/252480145/Swedish-municipalities-test-AI-to-drive-efficiencies-and-cost-savings (accessed 13 June 2022).

Omatu S. (2013) Distributed Computing and Artificial Intelligence. Cham: Springer.

Park D. A. (2017) A Study on Conversational Public Administration Service of the Chatbot Based on Artificial Intelligence. Journal of Korea Multimedia Society, (20): 1347-1356.

Ranerup A., Henriksen H. Z. (2020) Digital Discretion: Unpacking Human and Technological Agency in Automated Decision Making in Sweden's Social Services. Social Science Computer Review, 40 (2): 445-461.

Rb Den Haag (2020) ECLI:NL:RBDHA:2020:1878. Available at: https://uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2020:1878 (accessed 05 June 2022).

Ringes I. F. (2020) Frida Jobber Døgnet Rundt. Available at: https://memu.no/artikler/frida-jobber-dognet-rundt/ (accessed 10 June 2022).

Rinta-Kahila T., Someh I., Gillespie N., Indulska M., Gregor S. (2022) Algorithmic Decision-Making and System Destructiveness: A Case of Automatic Debt Recovery. European Journal of Information Systems, 31 (3): 313-338.

Ruckenstein M., Velkova J. (2019) Automating Society 2019. Available at: https://algorithmwatch.org/en/automating-society-2019/finland/ (accessed 05 June 2022).

Theo A. (2018) Finnish AI Testing Successfully Identifies Future Retirees Facing Disability Pension. European Pensions. Available at: https://www.europeanpensions.net/ep/Finnish-AI-successfully-identifies-future-retirees-facing-disability-pension.php (accessed 20 May 2022).

UNESCAP (2019) Artificial Intelligence in the Delivery of Public Services. Available at: https://www.unescap.org/publications/artificial-intelligence-delivery-public-services (accessed 20 May 2022).

Vaananen N. (2021) The Digital Transition of Social Security in Finland: Frontrunner Experiencing Headwinds? Zaklad Ubezpieczen Spolecznych, nr 4/2021 (151): 70-85.

Vassilakopoulou P., Haug A., Salvesen L. M., Pappas I. O. (2022) Developing Human/AI Interactions for Chat-Based Customer Services: Lessons Learned from the Norwegian Government. European Journal of Information Systems, https://doi.org/10.1080/0960085X.2022.2096490.

Veale M., Binns R., Edwards L. (2018) Algorithms that Remember: Model Inversion Attacks and Data Protection Law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376 (2133).

Vinnova (2018) Artificial Intelligence in Swedish Business and Society - Analysis of Development and Potential. Available at: https://www.vinnova.se/contentassets/29cd313d690e4be3a8d861ad05a4ee48/vr_18_09.pdf (accessed 30 May 2022).
