ADCOM 2024 – Call for Papers

ADCOM 2024 calls for original, high-impact research papers on Responsible AI frameworks and related technologies, seeking novel contributions that help define trust and security management in large AI systems. Topics of interest include, but are not limited to:

  • Ethical considerations in AI algorithm development
  • Fairness, accountability, and transparency in AI algorithms
  • AI Unlearning and Explainability
  • MLOps, DevOps, and AIOps for responsible AI and data science systems
  • Algorithmic bias and fairness in search and recommendation
  • Bias mitigation and fairness-aware machine learning
  • Software verification and validation for responsible AI and data science systems
  • Legal, ethical, and regulatory frameworks for responsible AI
  • Approaches for ensuring calibrated trust in AI
  • Actionable metrics that can be measured and monitored for AI ethics
  • Trust, Security, Privacy, Policy management
  • Agent Based Trust Management
  • Trustworthy and Responsible Autonomy
  • Authorization, Authentication and Identity Management
  • Tooling for ModelOps, proactive data protection, AI-specific security, model monitoring
  • Monitoring for data drift, model drift, and/or unintended outcomes
  • Risk controls for inputs and outputs to third-party models and applications
Important Dates
Full Technical Papers Sought by: 30 July 2024
Acceptance Communicated by: 15 Oct 2024
Final Print-Ready Papers Due by: 25 Nov 2024

Submissions are accepted only through EasyChair.

Download the PDF version of the ADCOM 2024 Call for Papers.

All accepted papers of ADCOM 2024 will be published in Springer's CCIS series. CCIS is abstracted/indexed in DBLP, Google Scholar, EI-Compendex, Mathematical Reviews, SCImago, and Scopus.

Information for Authors of Springer Computer Science Proceedings