5th MANILA International Conference on Artificial Intelligence: Challenges, Issues & Impacts (AICII-26) scheduled on March 30-April 1, 2026 Manila (Philippines)

AICII-26


Artificial Intelligence



Topics/Call for Papers



Full Articles, Reviews, Short Papers, and Abstracts are welcome in the following research fields:



The field of Artificial Intelligence (AI): Challenges, Issues & Impacts is one of the most critical and fast-evolving areas of discussion across technology, ethics, law, and economics. The conference addresses the immense power, inherent risks, and transformative effects of AI systems.



The topics can be organized into three core pillars:






1. ⚙️ Challenges in AI Development (Technical & Implementation)



This pillar focuses on the limitations, difficulties, and resource demands inherent in building and deploying AI systems, especially large-scale models like Generative AI.





  • Explainability and Interpretability (XAI):
    • The "Black Box" problem: Making complex deep learning models understandable.
    • Methods for Explainable AI (XAI) to build user trust and enable auditing.
  • Data and Robustness:
    • Challenges in curating, securing, and standardizing the massive datasets required for training frontier models.
    • Data Privacy vs. Data Utility: Reconciling AI's "hunger" for data with privacy regulations (such as the GDPR).
    • Adversarial Attacks and Security: Protecting AI systems from malicious inputs and data poisoning.
  • Resource and Environmental Impacts:
    • The massive computational cost (GPUs) and energy consumption required for training large models.
    • Environmental Footprint: Water consumption for cooling data centers and the contribution of AI infrastructure to e-waste and carbon emissions.
  • Scalability and Integration:
    • Difficulties in scaling AI initiatives from successful pilots to full enterprise implementation.
    • Challenges of integrating modern AI tools with legacy or outdated IT systems.





2. ⚖️ Ethical Issues and Societal Bias (Bias & Fairness)



This pillar addresses the direct harm and unfair outcomes that AI systems can perpetuate or create, particularly in high-stakes domains.





  • Bias and Discrimination:
    • Algorithmic Bias: Identifying and mitigating bias embedded in training data (historical, cultural, societal) that leads to discriminatory outcomes.
    • Impact of bias in critical sectors: Hiring/Recruitment, Credit Lending, Healthcare Diagnostics, and Criminal Justice (e.g., predictive policing).
  • Accountability and Liability:
    • Determining who is legally responsible when an autonomous AI system (e.g., a self-driving car or a diagnostic tool) causes harm or makes an incorrect decision.
    • Defining the scope of human oversight and control over increasingly autonomous AI agents.
  • Manipulation and Misinformation:
    • The creation and dissemination of Deepfakes (audio, video, text) and their impact on democracy, politics, and trust.
    • AI's role in amplifying filter bubbles and polarizing public opinion through biased content recommendations.
  • Creativity and Ownership (IP):
    • Intellectual Property (IP) and copyright challenges for content (text, images, music) generated by AI models trained on existing copyrighted works.
    • Defining originality and ownership of AI-generated creative works.










3. 🌐 Economic and Human Impacts (The Transformation)



This pillar examines the large-scale effects of AI on the global economy, labor market, and fundamental human relationships.





  • The Future of Work and Labor Economics:
    • Job Displacement and Automation: Assessing the risk of automation across different sectors (physical and mental/white-collar work).
    • Workforce Transformation: The need for massive reskilling and upskilling programs to prepare workers for new, technical AI-related jobs.
    • Economic Inequality: Analyzing how AI-generated wealth and productivity gains are distributed across companies, countries, and socioeconomic classes.
  • Governance and Regulation (Policy):
    • Global AI Regulation: Comparing approaches (e.g., the EU AI Act, US executive orders) and seeking international alignment on standards.
    • Risk-Based Regulation: Developing frameworks that adjust regulatory intensity based on the level of risk posed by a specific AI application (e.g., unacceptable, high, minimal).
    • Digital Sovereignty: The role of nations and international bodies (such as UNESCO) in establishing ethical guidelines and controlling the deployment of AI.
  • Impacts on Specific Sectors:
    • AI in Public Health (e.g., personalized medicine, drug discovery) and the accompanying privacy risks.
    • AI in Education (e.g., personalized learning, automated grading) and its effect on the role of human educators.
    • The use of AI in Warfare and Security (e.g., autonomous weapons systems).