The Review Process

AIMC 2025’s paper program committee is assembled from experts in the AI music research community and is listed on the AIMC 2025 website. Regular program committee members will be assigned 3-5 papers to review. Each paper will receive at least 3 reviews. Senior program committee members will be assigned additional metareview duties.

Reviews

For each paper, reviewers will be asked to:

  • Rate the paper on a scale from 1 (strong reject) to 5 (strong accept).
  • Rate their own confidence on a scale from 1 (low confidence) to 5 (high confidence).
  • Indicate whether the submission would be better suited to a demo session for work in development.
  • Provide a written review to the authors.
  • Add any additional confidential comments to the paper chair.

Reviews should show substantial engagement with the paper and provide constructive, respectful feedback. Reviews should begin with a short summary of the paper to confirm the reviewer’s understanding of the work. They should contain a clear rationale for the score given and a statement of any expectations of revisions needed for acceptance. Reviewers should take reasonable steps not to identify themselves and should avoid recommending their own work for citation.

Reviews should consider the following:

  • Relevance to AIMC community and grounding in relevant literature.
  • The originality or novelty of the submission as a contribution to the conference.
  • The artistic/scientific/theoretical quality of the submission.
  • The readability and organisation of the paper.
  • The appropriate use of methodology and reasonableness of claims.
  • Ethical standards.
  • Relationship of the submission to the conference theme.

Reviewers should clearly call out unreasonable claims, potentially misleading use of evidence, or weak methodology. In particular:

  • Papers that seek to show the benefits of a new algorithm, program, process, etc. should do so via rich analysis of users’ experience, seeking to identify shortcomings as well as benefits and avoiding unsupported claims.
  • Papers that seek to show benefits of applying AI music to areas such as health, wellbeing, disability support, community participation and inclusion, or the democratisation of creative practices, should show a depth of engagement with the problem space. For example, if a paper suggests benefits of AI music to people with disability, reviewers should rightly question whether such work has been well-grounded in an understanding of the needs of those communities.
  • There is no formal expectation of quantitative user results or performance benchmarks, and AIMC welcomes practice-based and practice-led approaches to new knowledge. While more formal results can significantly improve a paper’s impact and should be encouraged, authors and reviewers should equally scrutinise the claims made in relation to those results. Formal studies can strengthen a great piece of academic work, but can also undermine it if done badly.

Metareview and Final Decision Process

After the reviewing phase, a light metareview process will follow. The purpose of metareviewing is to address contradictions between reviewers and align scoring standards. Metareviewers should read the reviews and ask reviewers to discuss any points of contradiction. They should then collate and summarise the key points, clearly stating any revisions needed, and provide a final rating (1-5, strong reject to strong accept). During discussion, reviewers should not update their original comments or scores unless they have made a clear error.

The chair will make final decisions based on the metareview ratings. In discussion with senior PC members, the chair reserves the right to make final decisions on acceptance, considering issues of equity, diversity and inclusion and the overall academic profile of the conference.

Tutorial and Workshop Reviews

The reviewing process follows the same general principles as the paper review process, but is more streamlined. Tutorial and workshop submissions will be subject to single-blind peer review by at least two reviewers. There will be no metareviewing phase, and final decisions will be taken jointly by the tutorial and conference chairs, based on the quality of the proposed activities as well as the limits imposed by the conference schedule.

Contributors may be asked to adapt their proposals slightly to better fit the schedule and/or other organisational requirements.