Ethics, Risks and Challenges of Artificial Intelligence
Akira Wells
Webster University
ITM 6000
Professor Erasmus McEady
February 23, 2025
Abstract
Artificial intelligence (AI) is transforming nearly every aspect of contemporary society more rapidly than ever before, presenting deep challenges of morality, responsibility, and regulation. This paper explores the ethical dangers and issues that AI systems introduce into modern society. Drawing on interdisciplinary literature spanning computer science, philosophy, law, and public policy, it analyzes four fundamental problem areas: the lack of regulatory systems, algorithmic bias and discrimination, threats to data privacy and security, and accountability in autonomous decision-making. The paper then surveys the situational terrain in which AI currently operates, identifying the sectors and stakeholders most affected. Proceeding from the premise that ethical risk awareness and governance are required for the responsible use of AI, the paper concludes by offering a governance model that balances innovation and protection. Grounded in coursework in information technology ethics, systems design, and digital policy, the paper makes an original contribution to the developing field of responsible AI.
Keywords: artificial intelligence, data privacy, accountability, AI ethics, professional standards, AI governance, algorithmic bias
Part 1: Introduction and Background
Ethics, Risks and Challenges of Artificial Intelligence
Artificial intelligence represents one of the most revolutionary technological changes in human history. Large language models that write with human-like fluency and autonomous systems that render real-time decisions in healthcare, transportation, and criminal justice are only the most visible signs of AI's passage from abstract concept in computer science to inseparable part of daily life. The rate of this change, however, has far outpaced the ethical frameworks, regulatory infrastructure, and professional standards required to govern it responsibly.
Artificial intelligence does not raise technical issues alone; the problem of AI ethics is fundamentally a human problem. Machines have no consciousness, no emotions, and no moral judgment. They optimize the objectives their designers specify, often in ways that produce unintended and adverse side effects. When such systems are deployed in high-stakes domains such as medical diagnosis, credit scoring, hiring, and criminal sentencing, the stakes of ethical failure are not merely abstract but intensely personal and social. A biased algorithm does not simply produce wrong output; it systematically marginalizes certain groups of people, repeats historical injustices, and undermines public confidence in institutions.
This paper is structured to provide an in-depth analysis of the ethical environment of AI. Part 1 provides the conceptual and situational background of the problem, including key definitions, a statement of the research problem, a situational analysis of current AI adoption, and the premise governing the paper. Part 2 extends the discussion by conducting an original literature review, developing a substantial, evidence-based discussion of the principal ethical risk factors, and concluding with governance recommendations and a reflection on how previous coursework in information technology shaped the present study.
Statement of the Problem
The development of artificial intelligence is proceeding at an unprecedented rate, in an environment shaped by minimal regulation. The entry barriers to AI development are low and decreasing. Open-source frameworks, cloud computing infrastructure, and pre-trained models have democratized AI tools, so that both well-resourced actors and those without such resources can deploy AI systems with little to no supervision. In contrast to sectors such as pharmaceuticals or aviation, where regulatory infrastructure develops alongside technological capability, the AI industry lacks a consistent, enforceable code of ethical conduct or a professional body with recognized accreditation authority.
This regulatory vacuum has enabled the rapid and largely unregulated adoption of AI in key sectors, including academia, medicine, public administration, financial services, and national security. A central issue is the use of AI in decision-making that was previously the prerogative of human beings. Algorithms increasingly shape individual lives as decisions about who receives a loan, who is flagged as a criminal risk, who is selected for a job interview, and who receives a particular medical treatment are delegated, wholly or partially, to algorithmic systems. These systems are not sentient; they cannot empathize, reason morally, or attend to context in the way responsible decision-making requires.
Accountability and privacy complicate the issue further. When an AI system fails and delivers a harmful result, responsibility is diffused. Is it the developer who wrote the algorithm? The organization that implemented it? The data provider whose training sets encoded historical bias? This diffusion of responsibility creates a moral hazard in which damaging AI applications are free to spread. At the same time, the data-hungry character of AI systems poses an acute risk to individual privacy: large volumes of sensitive personal information are collected, processed, and in many cases misused without appropriate user consent or disclosure.
An even more disturbing prospect lies at the technological edge. AI systems that write and update their own code create a situation in which the human element can gradually be phased out, not through conscious design, but through the incremental outsourcing of mental labor to machines. This is not merely a science fiction scenario. Existing large language models already contribute to software development at a scale that was inconceivable only a few years ago. This trend toward increased autonomy in technical systems demands immediate scholarly and regulatory attention.
Situational Analysis
The technological landscape of AI implementation is broad and changing rapidly. In the education sector, AI systems are applied to automated grading, personalized learning platforms, admissions screening, and the detection of academic dishonesty. Although these applications provide efficiency benefits, they also raise concerns about bias in algorithmic grading, commercialization of the learning process, and erosion of the teacher-student relationships that are central to holistic human development.
In healthcare and medicine, AI applications include diagnostic imaging analysis, drug discovery, patient triage, and clinical decision support. The stakes in this field are uniquely high because mistakes can be fatal. Research has reported that AI diagnostic tools trained on data from one demographic group are far less accurate when applied to another, an acute driver of health inequity. Moreover, many clinical AI systems are opaque, limiting physicians' ability to understand, challenge, or override machine-generated recommendations.
In governance and public administration, AI is being deployed for predictive policing, benefit determination, and border control. These are among the most consequential uses of state power, and documented cases of discriminatory performance in algorithmic policing and judicial risk assessment instruments represent grave harm to civil liberties and democratic accountability. A prominent example is the COMPAS recidivism algorithm, which became the subject of public debate when analyses revealed significant racial disparities in its risk evaluations.
In the intelligence and national security field, AI has been incorporated into surveillance systems, autonomous weaponry, and cyberwarfare capabilities. These applications raise the most fundamental ethical and existential questions, touching on sovereignty, proportionality, the laws of armed conflict, and the future stability of the international order. AI in this domain is the least regulated and possibly the most consequential.
Across all these sectors, a common pattern emerges: AI systems are trained on historical data and inherit the biases embedded in it. They possess neither emotion nor ethical judgment. They are not governed by professional licensing, fiduciary obligations, or oaths of professional responsibility, as human professionals are in medicine, law, and engineering. And they are deployed by organizations whose primary motivation is not the public good but competitive advantage and share price.
Premise
The premise guiding this study is that the responsible use of artificial intelligence requires both ethical risk awareness and governance. This premise rules out two extreme stances often found in popular discourse. The first is uncritical techno-optimism: the view that AI is necessarily beneficial and that regulating its development will stifle innovation. The second is techno-pessimism: the view that AI is perilous in its essence and must be heavily restricted or prohibited.
The premise adopted here is more nuanced and more demanding. It holds that AI is neither good nor bad in itself; its consequences depend heavily on the values, incentives, and constraints under which the people creating and implementing it operate. Responsible AI is achievable, but it requires intentional, sustained, and strategic effort by technologists, policymakers, ethicists, civil society groups, and the communities on which AI systems have the greatest effect.
Key Definitions
For the purposes of this paper, the following definitions are adopted. Artificial intelligence refers to computational systems designed to perform tasks traditionally understood to require human cognitive abilities, such as reasoning, learning, problem-solving, perception, and language comprehension. These systems range from rule-based expert systems to contemporary neural networks trained on massive data sets through machine learning methods.
Data privacy is defined as the right of individuals to control the collection, use, and sharing of their personal information. In the context of AI, data privacy concerns the mass collection of user data to train algorithms, the opaque use of personal data in automated decision-making, and the secondary use of data beyond its initially approved purposes.
Accountability, as used in this paper, is the obligation of the actors who create, implement, or regulate AI systems to answer for the outcomes of those systems and to accept appropriate responsibility when harm occurs. Accountability includes ex ante measures (e.g., conducting impact assessments before deployment) and ex post measures (e.g., providing redress mechanisms for people harmed by AI systems).
In the context of AI, ethics refers to the application of moral principles to the design, development, deployment, and governance of AI systems. Core ethical principles include fairness, transparency, accountability, beneficence, non-maleficence, respect for autonomy, and justice.
Professional standards are the codes of conduct, licensing requirements, and norms of practice that govern professionals in a given field. There is growing debate in the AI context over the need to develop analogous standards for AI developers and practitioners, modeled on engineering, medicine, and law.
Limitations and Delimitations
This study acknowledges two main limitations. First, AI technology changes at an exceptionally fast rate; developments current at the time of writing could be obsolete within months. The findings of this paper should therefore be read as representing the state of knowledge as of early 2025, with the expectation that further study will be required to keep pace with the changing ethical landscape.
Second, the field of AI ethics suffers from a relative lack of longitudinal data and empirical evidence about the actual harms that AI systems inflict in the real world. Much of the available literature is theoretical, speculative, or based on a small number of cases that are not readily generalizable. Although this paper draws on the best available evidence, more rigorous empirical studies are needed.
This paper deliberately excludes the technical implementation details of AI applications and robotics engineering. Although these aspects of AI are significant, they fall outside the frame of this ethically oriented inquiry. The emphasis is on the social, organizational, and governance dimensions of AI risk.
Part 2: Literature Review, Analysis and Governance Recommendations
Literature Review
Academic research on AI ethics has grown exponentially in the last ten years, reflecting increasing public and scholarly interest in the social impact of AI systems. This review surveys the main contributions in four thematic areas: algorithmic bias and discrimination, privacy and data governance, accountability and transparency, and regulatory frameworks.
Algorithmic bias has been one of the most widely researched aspects of AI ethics. Foundational research in this field demonstrated that the error rates of facial recognition systems are dramatically higher for women and individuals with darker skin tones than for white men (Buolamwini & Gebru, 2018). Because this finding emerged from the analysis of commercially deployed systems, its implications for AI governance were enormous, and several major cities and jurisdictions subsequently imposed moratoria on law enforcement use of facial recognition. Later studies extended the analysis of algorithmic bias to areas such as natural language processing, where word embedding models were found to reproduce gender and racial stereotypes present in their training data (Bolukbasi et al., 2016).
The literature on privacy and AI has been shaped considerably by Shoshana Zuboff's (2019) concept of surveillance capitalism, which describes a new business model in the digital economy premised on the extraction, commodification, and monetization of personal behavioral data. In this account, AI systems are not simply tools but instruments of a new form of economic power that operates by predicting and influencing human behavior at scale. The privacy implications are structural rather than merely technical, exposing the inadequacy of regulatory paradigms built in an era when data collection was confined to specific contexts.
Accountability and transparency have been recurring topics in the AI ethics literature, particularly regarding the opacity of contemporary machine learning systems. Lipton (2018) examined the interpretability of machine learning models, distinguishing between transparency (the model's decision logic can be directly inspected) and post-hoc explanations (approximate accounts of how the model behaves). The tension between model performance and interpretability is well established in the literature: the most accurate models, such as deep neural networks, are often the least interpretable. This opacity poses severe accountability problems, especially in legal contexts that may impose a right to explanation for decisions affecting individuals.
The literature on AI regulation is abundant and constantly evolving. In 2018, the European Union's General Data Protection Regulation (GDPR) came into force, granting individuals substantial new rights regarding automated decision-making, including a qualified right not to be subject to solely automated decisions with significant consequences. The EU AI Act, which entered into force in 2024, is the most comprehensive attempt to regulate AI to date; it adopts a risk-based framework that classifies AI applications by their potential for harm and imposes corresponding obligations on developers. Scholars have questioned the sufficiency of both instruments: some argue that the GDPR framework does not translate effectively to the AI context (Wachter et al., 2017), while others worry about how the EU AI Act will treat high-risk AI in sensitive domains.
Outside Europe, regulatory approaches to AI vary widely. The United States has traditionally relied on a sectoral and voluntary approach, with agencies including the Federal Trade Commission, the Equal Employment Opportunity Commission, and the Consumer Financial Protection Bureau issuing guidance on AI use within their respective spheres. Critics argue that this approach is too fragmented and too deferential to industry to be effective. China, in turn, has issued targeted AI regulations for particular applications, such as recommendation algorithms, synthetic media, and generative AI, while simultaneously pursuing strategic advantage in AI development as a national priority. Such regulatory divergence complicates global governance, since AI systems and their impacts do not respect national boundaries.
In the medical field, researchers have raised concerns about AI diagnostic tools that have not been sufficiently validated across diverse patient populations, the consequences of AI-supported clinical decisions for the physician-patient relationship, and the ethics of using patient data for further AI training (Obermeyer et al., 2019). Controversy over algorithmic risk assessment tools such as COMPAS has produced a rich literature on the mathematical properties of fairness, demonstrating that multiple statistical fairness criteria cannot generally be satisfied simultaneously and examining what this impossibility means for due process and equal protection (Chouldechova, 2017; Dressel & Farid, 2018).
Taken together, the literature depicts a rapidly changing technological environment in which ethical risks are substantial, governance structures fall short, and the scholarly community is actively engaged in both problem identification and solution design. This paper contributes to that literature by synthesizing these themes into a coherent framework for understanding AI's ethical risks and by offering practical governance recommendations, grounded in the premise that ethical awareness and governance are preconditions for the responsible use of AI.
Principal Ethical Risks of Artificial Intelligence
Building on the analysis in Part 1 and the literature reviewed above, this section presents a systematic analysis of the principal ethical risks of artificial intelligence. These risks fall into four categories: discrimination and inequality, loss of privacy, accountability gaps, and systemic and existential risks.
Inequality and Discrimination. Algorithmic discrimination is probably the best documented ethical risk of AI. It arises from several sources: biased training data that reflects historical patterns of discrimination, proxy variables correlated with protected characteristics, and objective functions that optimize metrics favoring some groups over others. The effects of algorithmic discrimination are not evenly distributed. They fall hardest on already marginalized groups, including people of color, women, low-income individuals, and persons with disabilities, thereby worsening rather than improving existing social inequalities.
Feedback loop bias is a particularly harmful form of algorithmic discrimination. When an AI system is deployed in a domain such as predictive policing, and its outputs are used to determine the deployment of policing resources, the resulting arrests generate additional data that appear to confirm the model's initial projections. This becomes a self-fulfilling prophecy that can be corrected only by a complete redesign of the system. The AI system is not merely flawed; it systematizes and magnifies historical injustices in ways that are difficult to detect and to confront.
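A stylized simulation can make this dynamic concrete. In the hypothetical sketch below (all numbers are invented for illustration, not drawn from any real deployment), two districts have identical true crime rates, but patrol resources are allocated in proportion to historical arrest counts. Because newly observed arrests scale with patrol presence rather than with underlying crime, the arrest data perpetually "confirms" the initial imbalance instead of converging toward the true, equal rates.

```python
# Hypothetical illustration of feedback loop bias (all numbers invented).
# Two districts have IDENTICAL true crime rates, but patrols are
# allocated in proportion to historical arrest counts.

TRUE_CRIME_RATE = 0.1    # identical in both districts
TOTAL_PATROLS = 100
arrests = [60.0, 40.0]   # slight historical imbalance: district A vs. district B

for year in range(20):
    total = sum(arrests)
    # Feedback loop: patrols follow past arrests, not true crime.
    patrols = [TOTAL_PATROLS * a / total for a in arrests]
    # Observed arrests scale with patrol presence.
    new_arrests = [p * TRUE_CRIME_RATE for p in patrols]
    arrests = [a + n for a, n in zip(arrests, new_arrests)]

share_a = arrests[0] / sum(arrests)
print(f"District A's share of cumulative arrests after 20 years: {share_a:.2f}")
# The 60/40 imbalance never corrects itself, despite equal true crime rates.
```

The point of the sketch is that no amount of additional data collected under this allocation rule can falsify the model's initial projection; correction requires redesigning the system itself.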
Privacy Erosion. Modern AI systems are data intensive and pose deep and manifold threats to privacy. At the simplest level, AI systems cannot train effectively without vast amounts of personal data, and such data is frequently collected without meaningful transparency or informed consent. At a larger scale, by combining data from numerous sources, AI systems can draw inferences from seemingly innocuous information about sensitive personal attributes, such as health conditions, sexual orientation, political views, and financial circumstances.
Generative AI systems introduce novel dimensions of privacy risk. Training large language models on data scraped from the internet has raised questions about intellectual property and about the privacy of individuals whose personal data appears in the training sets. The creation of synthetic media, including photorealistic images and video of real people, enables new forms of non-consensual media production, identity fraud, and misinformation that current legal systems are ill-equipped to handle.
Privacy concerns are most urgent where AI is combined with surveillance infrastructure. With AI-powered facial recognition deployed in public spaces, the anonymity that individuals have traditionally enjoyed in public life disappears. Combined with mobile device tracking, social media monitoring, and financial transaction data, AI-powered surveillance creates the technological preconditions for a degree of social control that would previously have been unthinkable. Even in democratic societies, the normalization of pervasive surveillance exerts a chilling effect on free expression, political dissent, and civil society.
Accountability Gaps. The decentralization of the AI development and deployment chain creates several accountability gaps that are at once an ethical failure and a governance problem. When an AI system causes harm, the list of possible responsible parties includes the researchers who created the original algorithms, the engineers who applied them, the organizations that trained the system, the companies that made it available, and the regulators who failed to block its use. In practice, each of these actors can pass responsibility along the chain to others.
The accountability problem is compounded by the opacity of most AI systems. When a rejected loan applicant, a claimant denied benefits, or a criminal defendant has no understanding of what an AI system used to make the determination affecting them, any meaningful opportunity to appeal the decision is effectively removed. This contravenes the basic requirements of procedural fairness and due process. Although some legal regimes, including the GDPR, provide a right to explanation, that right remains poorly defined and its practical application is uneven.
Another dimension of the accountability problem is the temporal gap between deployment and harm. AI systems can be implemented at scale long before the effects of their operation are identified. By the time damage is detected, the systems may be so deeply embedded in organizational processes that remediation is difficult and expensive. This temporal dynamic argues for ex ante and continuous ethical scrutiny rather than purely retrospective correction.
Systemic and Existential Risks. Beyond the direct harms recorded in the literature, AI threatens social institutions and democratic processes at a systemic scale. The epistemic foundations of democratic deliberation are endangered by the use of AI to produce disinformation, mass-produce synthetic media, and target populations with manipulative content and influence campaigns. When people cannot distinguish true information from fabricated information, the conditions for informed democratic participation are compromised.
At the frontier of AI capability, increasingly autonomous systems intensify the debate by challenging current ethical frameworks. The development of AI systems capable of recursive self-improvement, or of pursuing goals incongruent with human values or interests, is not an immediate operational concern, but it is a grave topic of study and discussion among AI safety scholars. The governance systems created for present-day AI will need to be flexible and farsighted enough to accommodate the risks of the more capable AI systems of the future.
The Role of Ethics Education and Professional Standards
The lack of strong professional standards for AI developers and practitioners is one of the most prominent gaps in existing AI governance. Professional licensing regimes in fields such as medicine, law, and engineering serve many functions: they set basic competence thresholds, impose disciplinary sanctions for breaches of ethical obligations as a condition of practice, and cultivate a professional culture that intrinsically values and enforces ethical behavior.
No comparable regime governs AI professionals. Although professional organizations such as the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE) have developed codes of ethics for computer scientists and engineers, these codes are largely aspirational and their enforcement mechanisms are weak. In the absence of serious professional norms, individual AI developers face no formal ethical obligations and no professional penalty for building systems that cause harm.
Integrating ethics education into computer science and AI programs is a necessary but limited response to this problem. Although practitioners in AI fields must understand the ethical dimensions of their work, education alone cannot substitute for the structural incentives and accountability that professional standards provide. A comprehensive strategy for AI professional standards would combine ethics training with credentialing, codes of conduct, independent ethics oversight bodies, and disciplinary measures comparable to those that function in established professions.
Research grounded in prior coursework contributed directly to this aspect of the analysis. Knowledge of information technology ethics, systems design theory, digital policy, and organizational governance supplied the intellectual tools to understand AI ethics not as an abstract philosophical issue but as a governance problem with institutional, organizational, and professional dimensions. This interdisciplinary foundation is reflected in the discussion throughout this paper, which draws on computer science, philosophy, law, and public policy to develop a holistic approach to the emerging ethical dangers and regulation of AI.
A Framework for Ethical AI Governance
This section integrates the foregoing analysis into a set of proposals for ethical AI governance. The framework is organized around five interdependent pillars: transparency, accountability, fairness, privacy protection, and professional responsibility. Together, these pillars constitute a comprehensive approach to responsible AI development and deployment, premised on ethical governance as a prerequisite to the beneficial use of AI.
Transparency requires that AI systems be designed and implemented in ways that permit meaningful insight into how they operate, the data they use, and the outcomes they generate. Transparency does not demand that every AI system be completely understandable in technical terms. Rather, it demands that developers share material information about their systems with deploying organizations, that deploying organizations share material information with affected individuals, and that regulators possess adequate information to evaluate compliance with relevant standards. Accountability presupposes transparency. Transparency is also a condition of societal trust, which is in turn a precondition for the beneficial integration of AI into social institutions.
Accountability also demands clear lines of responsibility for AI systems across the development and deployment chain. This encompasses organizational accountability, through tools such as mandatory impact assessments, audit requirements, and incident reporting obligations, as well as individual accountability, through existing legal doctrines of negligence, product liability, and professional misconduct. The design of AI-specific liability regimes is an active area of legal scholarship, and this paper endorses the view that liability rules should internalize the external costs of harmful AI systems while creating incentives for responsible development.
Fairness means that AI systems must be developed and deployed in ways that avoid discriminatory outcomes and affirmatively promote non-discriminatory treatment and equal opportunity. This is technically challenging, because several mathematical definitions of fairness have been shown to be impossible to satisfy simultaneously, and socially contested, because definitions of fairness vary across normative traditions. At minimum, fairness requires that AI developers conduct systematic demographic audits of their systems, disaggregate performance metrics across groups, and engage affected communities in the design and assessment of AI systems that impact their lives.
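To illustrate what one step of such a demographic audit might involve, the sketch below computes the demographic parity difference, one of the simpler statistical fairness metrics: the gap in favorable-outcome rates between two groups. The decision and group data are invented for illustration only.

```python
def demographic_parity_diff(predictions, groups):
    """Gap in favorable-outcome rates between group 0 and group 1.

    predictions: 1 = favorable decision (e.g., loan approved), 0 = unfavorable.
    groups: demographic group membership (0 or 1) for each individual.
    """
    def rate(g):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(members) / len(members)
    return abs(rate(0) - rate(1))

# Invented audit data for illustration only.
predictions = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups      = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

gap = demographic_parity_diff(predictions, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.60 vs. 0.20 -> 0.40
```

A real audit would apply this and complementary metrics (such as error-rate gaps) to held-out data disaggregated by every relevant group, precisely because, as noted above, no single metric can capture all fairness criteria at once.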
Privacy protection requires that AI systems be built with privacy as a design principle rather than an afterthought. This means embedding privacy-by-design principles in system architecture, minimizing data collection to what is strictly required for legitimate purposes, maintaining strong data security, and providing individuals with meaningful rights over the personal data used in AI systems. Emerging privacy-preserving AI methods, such as federated learning and differential privacy, offer promising technical means toward this goal, but they must be supported by organizational culture and regulation in order to see widespread use.
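To give a concrete sense of one such method, the sketch below implements the textbook Laplace mechanism of differential privacy for a simple counting query. The epsilon value and counts are invented for illustration; the principle is that noise calibrated to the query's sensitivity masks any single individual's contribution to the result.

```python
import math
import random

def laplace_noise(scale):
    """Sample from a Laplace(0, scale) distribution via inverse-CDF transform."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon):
    """Laplace mechanism for a counting query (sensitivity = 1).

    Smaller epsilon means stronger privacy but noisier answers.
    """
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(42)            # reproducibility for this illustration
true_count = 1000          # e.g., patients with a given condition (invented)
noisy = private_count(true_count, epsilon=0.5)
print(f"True count: {true_count}, privatized count: {noisy:.1f}")
```

The released value is close enough to be useful in aggregate while making it statistically deniable whether any one person is in the data set; the epsilon parameter is the policy lever that trades accuracy against privacy.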
Professional responsibility entails establishing robust standards for AI practitioners comparable to those of other professions with substantial public impact. This includes instituting credentialing requirements for AI professionals working in high-risk sectors, establishing and enforcing binding codes of professional conduct, creating ethics review mechanisms within AI development organizations, and fostering a professional culture in which ethical behavior is expected and institutionally reinforced.
Discussion: Coursework Contributions to This Research
This research paper extends understanding developed across preceding information technology courses. The introductory study of information systems and their organizational implications provided insight into how technology is developed, implemented, and governed within organizations, which grounds this paper's accountability analysis. Coursework in digital ethics and technology policy supplied the philosophical and legal frameworks through which algorithmic fairness and the right to explanation have been examined.
Coursework in data management and cybersecurity contributed directly to the privacy risk analysis by providing technical insight into how personal data is collected, processed, and secured, or left unsecured, in AI systems. Study of project management and systems development life cycles likewise informed the ethical risk management analysis, illustrating how ethical considerations can be incorporated at every stage of the development process, from requirements definition through deployment and monitoring.
The interdisciplinary character of the information technology curriculum has been most valuable in shaping this paper's analytical approach. The ability to move fluidly among technical, organizational, legal, and philosophical perspectives on AI ethics reflects habits of thought developed through coursework spanning computer science, management, legal studies, and social science. In that sense, the paper represents interdisciplinary training applied to a problem of pressing contemporary significance.
Conclusion
Artificial intelligence presents humanity with one of the defining governance challenges of the twenty-first century. Its benefits are undeniable: AI can accelerate scientific discovery, improve healthcare outcomes, broaden access to education, and make government services more efficient. Yet its dangers are equally real, including algorithmic discrimination, erosion of privacy, gaps in accountability, and systemic threats to democratic institutions.
The central argument of this paper is that realizing AI's benefits and mitigating its risks are not competing goals but mutually reinforcing ones. Ethical governance does not stand in the way of beneficial AI; it is a precondition for it. AI systems that are opaque, discriminatory, unaccountable, or privacy-violating will never earn the public trust required for their constructive integration into social institutions. Developing robust governance frameworks, professional standards, and norms of ethical practice is therefore not only a moral obligation but a practical necessity for sustainable AI development.
This paper has addressed that need by offering a substantive analysis of AI ethics risks grounded in interdisciplinary scholarship, synthesizing the literature into a workable governance framework, and reflecting on how the author's training in information technology has informed the analysis. Responsible AI governance is urgent, collective, and ongoing work, and this paper is offered as a contribution to that undertaking.
References
Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems, 29, 4349–4357.
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15.
Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580.
European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L, 2024/1689.
European Parliament and Council of the European Union. (2016). Regulation (EU) 2016/679 (General Data Protection Regulation). Official Journal of the European Union, L 119, 1–88.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
Lipton, Z. C. (2018). The mythos of model interpretability. Queue, 16(3), 31–57. https://doi.org/10.1145/3236386.3241340
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1–21. https://doi.org/10.1177/2053951716679679
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson Education.
Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31(2), 841–887. https://doi.org/10.2139/ssrn.3063289
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.