• Venue: San Francisco Marriott Marquis
Research Papers ESEC/FSE 2023


We invite high-quality submissions, from both industry and academia, describing original and unpublished results of theoretical, empirical, conceptual, and experimental software engineering research.

NEW THIS YEAR: major revisions allowed! The main novelty of this year’s review process is that the initial outcome can be accept, reject, or major revision. If a paper is deemed publishable upon major revision, the authors are granted 8 weeks to perform the revisions, which may include additional experiments or new analyses of existing results; major rewriting of algorithms and explanations; or clarifications, better scoping, and improved motivation. The same reviewers who requested the major revision will then assess whether the revised submission adequately satisfies their requests.

NEW THIS YEAR: research methods declaration! This year, in addition to declaring the topics relevant to their submissions, authors will be asked to declare the research methods employed in their submissions. This will enable us to ensure reviewer expertise for both research methods and topics. For full definitions of the research methods, see the SIGSOFT Empirical Standards.

NEW THIS YEAR: adoption of SIGSOFT Open Science Policy! This year, ESEC/FSE has adopted the SIGSOFT Open Science Policy, and we encourage authors to provide a replication package. For authors, this means providing a supporting statement on the availability of a replication package (or lack thereof) in their submitted papers, in a section named Data Availability after the Conclusion section. See more details below.

Contributions should describe innovative and significant original research. Papers describing groundbreaking approaches to emerging problems are also welcome, as well as replication papers. Submissions that facilitate reproducibility by using available datasets or making the described tools and datasets publicly available are especially encouraged. For a list of specific topics of interest, please see the end of this call.

At the time of submission, all papers must conform to the ESEC/FSE 2023 Format and Submission Guidelines, and must not exceed 10 pages for all text and figures, plus 2 pages for references. All submissions must be in English and in PDF format. Submissions that do not comply with the above instructions will be desk-rejected without review. Papers must be submitted electronically through the ESEC/FSE submission site:

https://esecfse2023.hotcrp.com/

Each submission will be reviewed by at least three members of the program committee. When the initial outcome of the three reviews is major revision, authors will have an opportunity to address the reviewers’ requests during an 8-week major revision period. The revised submission must be accompanied by a response letter in which the authors explain how they addressed each concern expressed by the reviewers.

Submissions will be evaluated on the basis of originality, importance of contribution, soundness, evaluation (if relevant), quality of presentation, and appropriate comparison to related work. Some papers may have more than three reviews, as PC chairs may solicit additional reviews based on factors such as reviewer expertise and strong disagreement between reviewers. The program committee as a whole will make final decisions about which submissions to accept for presentation at the conference.

Double-Anonymous Review Process

In order to ensure the fairness of the reviewing process, the ESEC/FSE 2023 Research Papers Track will employ a double-anonymous review process, in which external reviewers do not know the identity of authors, and authors do not know the identity of external reviewers. The papers submitted must not reveal the authors’ identities in any way:

  • Authors should leave out author names and affiliations from the body of their submission.
  • Authors should ensure that any citation to related work by themselves is written in third person, that is, “the prior work of XYZ” as opposed to “our prior work”.
  • Authors should not include URLs to author-revealing sites (tools, datasets). Authors are still encouraged to follow open science principles and submit replication packages, see more details on the open science policy below.
  • Authors should anonymize author-revealing company names, instead providing the general characteristics of the organisations involved that are needed to understand the context of the paper.
  • Authors should ensure that paper acknowledgements do not reveal the origin of their work.

The double-anonymous process used this year is “heavy”, i.e., paper anonymity will be maintained during all reviewing and discussion periods. In the case of a major revision, authors must therefore maintain anonymity in their response letter and must not provide any additional information that could be author-revealing.

To facilitate double-anonymous reviewing, we recommend that authors postpone publishing their submitted work on arXiv or similar sites until after notification. Authors who have already uploaded to arXiv or similar sites should avoid specifying that the manuscript was submitted to ESEC/FSE 2023.

Authors with further questions on double-anonymous reviewing are encouraged to contact the program chairs by email. Papers that do not comply with the double-anonymous review process will be desk-rejected.
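As a final self-check before submitting, it can help to search the paper sources for leftover author-revealing strings. A minimal sketch is shown below; the names, affiliation, and email address are placeholders to substitute with your own, and this is a convenience check, not part of the official process:

```shell
# Placeholder author-revealing terms -- substitute your own names,
# affiliations, and email addresses.
TERMS='Alice Smith|Example University|asmith@example\.edu'

# Search the LaTeX sources, case-insensitively, for any of the terms.
grep -r -n -i -E "$TERMS" *.tex 2>/dev/null \
  && echo "Possible identity leak found above." \
  || echo "No obvious author-revealing strings in the sources."
```

The same search can be run over the compiled PDF by piping `pdftotext paper.pdf -` into `grep`, which also catches strings embedded via included files.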

Submission Policies

Papers submitted for consideration to ESEC/FSE must not have been published elsewhere and must not be under review or submitted for review elsewhere during the reviewing period. Specifically, authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions.

To prevent double submissions, the chairs might compare the submissions with those at related conferences that have overlapping review periods. The double-submission restriction applies only to refereed journals and conferences, not to unrefereed forums (e.g., arXiv.org). To check for plagiarism issues, the chairs might use external plagiarism detection software.

All publications are subject to the ACM Author Representations policy.

By submitting your article to an ACM publication, you acknowledge that you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects.

Alleged violations of any of the above policies will be reported to ACM for further investigation and may result in a full retraction of your paper, in addition to other potential penalties, as per the ACM Publications Policies.

Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process if your paper is accepted. ACM has been involved in ORCID from the start and has recently committed to collecting ORCID IDs from all published authors. The collection process has started and will roll out as a requirement throughout 2022. ACM is committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.

Important Dates

All dates are 23:59:59 AoE (UTC-12h).

  • Paper registration: 26 January, 2023 (to register a paper, only the paper title, author list and some additional metadata are required)
  • Full paper submission: 2 February, 2023
  • Initial notification: 4 May, 2023
  • Revised manuscript submissions (major revisions only): 29 June, 2023
  • Final notification for major revisions: 27 July, 2023
  • Camera ready: 24 August, 2023
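Since AoE (Anywhere on Earth) is UTC-12, a submission is on time as long as the deadline date has not yet ended in that time zone. A quick command-line check is sketched below; it assumes GNU date, and uses the IANA zone name Etc/GMT+12, which (with the inverted POSIX sign convention) denotes UTC-12:

```shell
# Epoch second of the full-paper deadline, 2 February 2023 23:59:59 AoE.
deadline=$(TZ=Etc/GMT+12 date -d '2023-02-02 23:59:59' +%s)

# Compare against the current time.
now=$(date +%s)
if [ "$now" -le "$deadline" ]; then
    echo "Still before the full-paper deadline (AoE)."
else
    echo "The full-paper deadline has passed (AoE)."
fi
```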

NOTE: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

Open Science Policy

The research track of ESEC/FSE has introduced an open science policy. Openness in science is key to fostering scientific progress via transparency, reproducibility, and replicability. The steering principle is that all research results should be accessible to the public, if possible, and that empirical studies should be reproducible. In particular, we actively support the adoption of open data and open source principles and encourage all contributing authors to disclose (anonymized and curated) data to increase reproducibility and replicability.

Upon submission to the research track, authors are asked to make a replication package available to the program committee (via upload of supplemental material or a link to a private or public repository) or to comment on why this is not possible or desirable. Furthermore, authors are asked to indicate whether they intend to make their data publicly available upon acceptance. We ask authors to provide a supporting statement on the availability of a replication package (or lack thereof) in their submitted papers, in a section named Data Availability after the Conclusion section. Be careful that such statements continue to maintain author anonymity. For more details, see the ESEC/FSE open science policy.

Authors of accepted papers will be given an opportunity (and encouragement) to submit their data and tools to the separate ESEC/FSE’23 artefact evaluation committee.

Topics of Interest

Topics of interest include, but are not limited to:

  • Artificial intelligence and machine learning for software engineering
  • Autonomic computing
  • Debugging and fault localization
  • Dependability, safety, and reliability
  • Distributed and collaborative software engineering
  • Embedded software, safety-critical systems, and cyber-physical systems
  • Empirical software engineering
  • Human aspects of software engineering
  • Human-computer interaction
  • Mining software repositories
  • Mobile development
  • Model checking
  • Model-driven engineering
  • Parallel, distributed, and concurrent systems
  • Performance engineering
  • Program analysis
  • Program comprehension
  • Program repair
  • Program synthesis
  • Programming languages
  • Recommendation systems
  • Requirements engineering
  • Search-based software engineering
  • Services, components, and cloud
  • Software architectures
  • Software engineering education
  • Software engineering for machine learning and artificial intelligence
  • Software evolution
  • Software processes
  • Software security
  • Software testing
  • Software traceability
  • Symbolic execution
  • Tools and environments

FAQ on Review Process: Major Revisions, Open Science Policy, Double-Anonymous Reviewing

Major revision process.

Q: Why is ESEC/FSE allowing major revisions?

A: SE conferences are currently forced to reject papers that include valuable material but would need major changes to become acceptable for conference presentation, because major revisions cannot be accommodated in the current review process. By supporting only a binary outcome, conferences force reviewers to decide between rejection and acceptance even in borderline cases that would be better judged after a round of major revision. This can cause additional reviewing burden for the community (the paper is resubmitted to another venue with new reviewers) and inconsistency for the authors (the new reviewers have different opinions). By allowing major revisions, we hope both to increase the acceptance rate of ESEC/FSE and to reduce these problems with the reviewing process.

For Authors

Q: If my paper receives major revisions, what happens next?

A: The meta-review will clearly and explicitly list all major changes required by the reviewers to make the paper acceptable for publication. Authors of these papers are granted 8 weeks to implement the requested changes. In addition to the revised paper, authors are asked to submit a response letter that explains how each required change was implemented. If any change was not implemented, authors can explain why. The same reviewers will then review the revised paper and make their final (binary) decision. Authors can also choose to withdraw their submission if they wish.

Q: Will major revision become the default decision causing initial acceptance rates to drop?

A: This is not the intention: reviewers are instructed to accept all papers that would have been accepted when major revision was not an available outcome.

For Reviewers

Q: When shall I recommend major revision for a paper?

A: Major revision should not become the default choice for borderline papers. It should be used only if:

  • without major revisions the paper would be rejected, while a properly executed major revision that addresses the reviewers’ concerns could make the paper acceptable for publication;
  • the requested changes are doable in 8 weeks and implementable within the page limit;
  • the requested changes are strictly necessary for paper acceptance (i.e., not just nice-to-have features);
  • the requested changes require a recheck (i.e., reviewers cannot trust the authors to implement them directly in the camera-ready version).

Q: When shall I recommend rejection instead of major revision?

A: Rejection is a more appropriate outcome than major revision if:

  • the requested additions/changes are not implementable in 8 weeks;
  • the contribution is very narrow or not relevant to the SE audience, and cannot be retargeted in 8 weeks;
  • the methodology is flawed and cannot be fixed in 8 weeks;
  • the results are unconvincing, the paper does not seem to improve the state of the art much, and new convincing results are unlikely to be available after 8 weeks of further experiments;
  • the customary benchmark used in the community was ignored and cannot be adopted and compared against in 8 weeks.

Q: When shall I recommend acceptance instead of major revision?

A: We do not want major revision to become the primary pathway to acceptance. We should continue to trust the authors to make minor changes in the camera-ready version. Acceptance is preferable if:

  • the requested additions/changes are nice-to-have features, not mandatory for the acceptability of the work;
  • only minor improvements of the text are needed;
  • minor clarifications requested by the reviewers should be incorporated;
  • important but not critical references should be added and discussed;
  • the discussion of results could be improved, but the current one is already sufficient.

Q: What is the difference between major revision and shepherding?

A: Major revision is not shepherding. While shepherding typically focuses on important but minor changes, which can be specified in an operational way and checked quite easily and quickly by reviewers, major revisions require major changes (although doable in 8 weeks). This means the instructions for the authors cannot be completely operational, and the check will need to go deeply into the new content delivered by the paper. Hence, while the expectation for shepherded papers is that most of them will be accepted once the requested changes are implemented, this is not necessarily the case with major revisions.

Q: Is there a quota of papers that can have major revision as outcome?

A: As there is no quota for the accepted papers, there is also no quota for major revisions. However, we expect that thanks to major revisions we will be able to eventually accept 10-15% more papers, while keeping the quality bar absolutely unchanged.

Q: What shall I write in the meta-review of a paper with major revision outcome?

A: With the possibility of a major revision outcome, meta-reviews become extremely important. The meta-review should clearly and explicitly list all major changes required by the reviewers to make the paper acceptable for publication. The meta-review should act as a contract between reviewers and authors, such that when all required changes are properly made, the paper is accepted. In this respect, the listed changes should be extremely clear, precise, and implementable.

Review Process

Q: Can I withdraw my paper?

A: Yes, papers can be withdrawn at any time using HotCRP.

Q: The authors have provided a URL to supplemental material. I would like to see the material but I worry they will snoop my IP address and learn my identity. What should I do?

A: Contact the Program Co-Chairs, who will download the material on your behalf and make it available to you.

Q: If I am assigned a paper for which I feel I am not an expert, how do I seek an outside review?

A: PC members should do their own reviews, not delegate them to someone else. Please contact the Program Co-Chairs, especially since additional reviewers might have a different set of conflicts of interest.

Q: What is the ESEC/FSE 2023 open science policy and how can I follow it?

A: Openness in science is key to fostering scientific progress via transparency, reproducibility, and replicability. Upon submission to the research track, authors are asked to:

  • make their data available to the program committee (via upload of supplemental material or a link to an anonymous repository) and provide instructions on how to access this data in the paper; or
  • include in the paper an explanation as to why this is not possible or desirable; and
  • indicate if they intend to make their data publicly available upon acceptance. This information should be provided in the submitted papers, in a section named Data Availability after the Conclusion section. For more details, see the ESEC/FSE open science policy.

Q: How can I upload supplementary material via the HotCRP site and make it anonymous for double-anonymous review?

A: To conform to the double-anonymous policy, please include an anonymized URL. Code and data repositories may be exported to remove version-control history, scrubbed of names in comments and metadata, and anonymously uploaded to a sharing site. Instructions are provided in the ESEC/FSE open science policy.
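For example, a history-free snapshot of a local git repository can be produced with `git archive` before scrubbing and uploading. This is one common approach, not a mandated procedure, and the repository path below is a placeholder:

```shell
# Create a tar snapshot of the current HEAD without the .git directory,
# i.e. without the version-control history (commit authors, emails) that
# would reveal your identity.
git -C path/to/your-repo archive --format=tar --output=supplement.tar HEAD

# List the archive contents to confirm no .git metadata slipped in;
# then scrub the files themselves for names before uploading.
tar -tf supplement.tar
```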

Double-Anonymous Reviewing (DAR)

Q: Why are you using double-anonymous reviewing?

A: Studies have shown that a reviewer’s attitude toward a submission may be affected, even unconsciously, by the identity of the authors.

Q: Do you really think DAR actually works? I suspect reviewers can often guess who the authors are anyway.

A: It is rare for authorship to be guessed correctly, even by expert reviewers, as detailed in this study.

Q: What exactly do I have to do to anonymize my paper?

A: Your job is not to make your identity undiscoverable but simply to make it possible for reviewers to evaluate your submission without having to know who you are: omit authors’ names from your title page, and when you cite your own work, refer to it in the third person. Also, be sure not to include any acknowledgements that would give away your identity. You should also avoid revealing the institutional affiliation of authors.

Q: I would like to provide supplementary material for consideration, e.g., the code of my implementation or proofs of theorems. How do I do this?

A: On the submission site, there will be an option to submit supplementary material along with your main paper. You can also share supplementary material in a private or publicly shared repository (preferred). This supplementary material should also be anonymized; it may be viewed by reviewers during the review period, so it should adhere to the same double-anonymous guidelines. See the instructions in the ESEC/FSE open science policy.

Q: My submission is based on code available in a public repository. How do I deal with this?

A: Making your code publicly available is not incompatible with double-anonymous reviewing. You can create an anonymized version of the repository and include a new URL that points to the anonymized version of the repository, similar to how you would include supplementary materials to adhere to the Open Science policy. Authors wanting to share GitHub repositories may want to look into using https://anonymous.4open.science/ which is an open source tool that helps you to quickly double-anonymize your repository.

Q: I am building on my own past work on the WizWoz system. Do I need to rename this system in my paper for purposes of anonymity, so as to remove the implied connection between my authorship of past work on this system and my present submission?

A: Maybe. The core question is really whether the system is one that, once identified, automatically identifies the author(s) and/or the institution. If the system is widely available, and especially if it has a substantial body of contributors and has been out for a while, then these conditions may not hold (e.g., LLVM or HotSpot), because there would be considerable doubt about authorship. By contrast, a paper on a modification to a proprietary system (e.g., Visual C++, or a research project that has not open-sourced its code) implicitly reveals the identity of the authors or their institution. If naming your system essentially reveals your identity (or institution), then anonymize it. In your submission, point out that the system name has been anonymized. If you have any doubts, please contact the Program Co-Chairs.

Q: I am submitting a paper that extends my own work that previously appeared at a workshop. Should I anonymize any reference to that prior work?

A: No. But we recommend you do not use the same title for your ESEC/FSE submission, so that it is clearly distinguished from the prior paper. In general, there is rarely a good reason to anonymize a citation. When in doubt, contact the Program Co-Chairs.

Q: Am I allowed to post my (non-anonymized) paper on my web page or arXiv?

A: To facilitate double-anonymous reviewing, we recommend that authors postpone publishing their submitted work on arXiv or similar sites until after notification. Authors who have already uploaded to arXiv or similar sites should avoid specifying that the manuscript was submitted to ESEC/FSE 2023.

Q: Can I give a talk about my work while it is under review? How do I handle social media?

A: We have developed guidelines, described here, to help everyone navigate consistently the tension between the normal communication of scientific results, which double-anonymous reviewing should not impede, and actions that essentially force potential reviewers to learn the identity of the authors of a submission. Roughly speaking, you may (of course!) discuss work under submission, but you should not broadly advertise your work through media that are likely to reach your reviewers. We acknowledge there are grey areas and trade-offs; we cannot describe every possible scenario.

Things you may do:

  • Put your submission on your home page.
  • Discuss your work with anyone who is not on the review committees, or with people on the committees with whom you already have a conflict.
  • Present your work at professional meetings, job interviews, etc.
  • Submit work previously discussed at an informal workshop, previously posted on arXiv or a similar site, previously submitted to a conference not using double-anonymous reviewing, etc.

Things you should not do:

  • Contact members of the review committees about your work, or deliberately present your work where you expect them to be.
  • Publicize your work on major mailing lists used by the community (because potential reviewers likely read these lists).
  • Publicize your work on social media if wide public [re-]propagation is common (e.g., Twitter) and therefore likely to reach potential reviewers. For example, on Facebook, a post with a broad privacy setting (public or all friends) saying, “Whew, ESEC/FSE paper in, time to sleep” is okay, but one describing the work or giving its title is not appropriate. Alternatively, a post to a group including only the colleagues at your institution is fine.

Reviewers will not be asked to recuse themselves from reviewing your paper unless they feel you have gone out of your way to advertise your authorship information to them. If you are unsure about what constitutes “going out of your way”, please contact the Program Co-Chairs.

Q: Will the fact that ESEC/FSE is double-anonymous have an impact on handling conflicts of interest?

A: Double-anonymous reviewing does not change the principle that reviewers should not review papers with which they have a conflict of interest, even if they do not immediately know who the authors are. Authors declare conflicts of interest when submitting their papers, using the guidelines in the Call for Papers. Papers will not be assigned to reviewers who have a conflict. Note that you should not declare gratuitous conflicts of interest; the chairs will compare the conflicts declared by the authors with those declared by the reviewers. Papers abusing the system will be desk-rejected.

Q: What should I do if I learn the authors’ identity? What should I do if a prospective ESEC/FSE author contacts me and asks to visit my institution?

A: If you feel that the authors’ actions are largely aimed at ensuring that potential reviewers know their identity, contact the Program Co-Chairs. Otherwise, you should not treat double-anonymous reviewing differently from other reviewing. In particular, refrain from seeking out information on the authors’ identity, but if you discover it accidentally this will not automatically disqualify you as a reviewer. Use your best judgement.

Q: How do we handle potential conflicts of interest since I cannot see the author names?

A: The conference review system will ask that you identify conflicts of interest when you get an account on the submission system.

Q: How should I avoid learning the authors’ identity, if I am using web-search in the process of performing my review?

A: You should make a good-faith effort not to find the authors’ identity during the review period, but if you inadvertently do so, this does not disqualify you from reviewing the paper. As part of the good-faith effort, please turn off Google Scholar auto-notifications. Please do not use search engines with terms like the paper’s title or the name of a new system being discussed. If you need to search for related work you believe exists, do so after completing a preliminary review of the paper.

The above guidelines are partly based on the PLDI FAQ on double-anonymous reviewing and the ICSE 2022 guidelines on double-anonymous submissions.

Program Committee

  • Kelly Blincoe (Program Co-Chair) – University of Auckland, New Zealand
  • Paolo Tonella (Program Co-Chair) – Switzerland
  • Rabe Abdalkareem – Omar Al-Mukhtar University
  • Bram Adams – Queen's University, Kingston, Ontario
  • Iftekhar Ahmed – University of California at Irvine, United States
  • Shaukat Ali – Simula Research Laboratory and Oslo Metropolitan University
  • Chetan Arora – Monash University
  • Alberto Bacchelli – University of Zurich
  • Titus Barik
  • Gabriele Bavota – Software Institute, USI Università della Svizzera italiana
  • Jonathan Bell – Northeastern University
  • Antonia Bertolino
  • Árpád Beszédes – Department of Software Engineering, University of Szeged
  • Barbora Buhnova – Masaryk University
  • Nimrod Busany – Accenture Labs
  • Joanna C. S. Santos – University of Notre Dame
  • Haipeng Cai – Washington State University
  • Saikat Chakraborty – Microsoft Research
  • Oscar Chaparro – William & Mary
  • Preetha Chatterjee – Drexel University, USA
  • Marsha Chechik – University of Toronto
  • Junjie Chen – Tianjin University
  • Sen Chen – College of Intelligence and Computing, Tianjin University
  • Tao Chen – University of Birmingham, United Kingdom
  • Jürgen Cito
  • Anthony Cleve – University of Namur
  • Maxime Cordy – SnT, University of Luxembourg
  • Vittorio Cortellessa – Università dell'Aquila, Italy
  • James C. Davis – Purdue University
  • Giovanni Denaro – University of Milano-Bicocca, Italy
  • Massimiliano Di Penta – University of Sannio, Italy
  • Xiaoning Du – Monash University, Australia
  • Mahdi Fahmideh – University of Southern Queensland
  • Sarah Fakhoury
  • Chunrong Fang – Nanjing University
  • Denae Ford
  • Thomas Fritz
  • Alessandro Garcia
  • Sepideh Ghanavati – University of Maine
  • Gouri Ginde (Deshpande) – University of Calgary
  • Rahul Gopinath – University of Sydney
  • Lars Grunske – Humboldt-Universität zu Berlin
  • Dan Hao – Peking University
  • Mark Harman – Meta Platforms Inc. and UCL
  • Pinjia He – The Chinese University of Hong Kong, Shenzhen
  • Vincent J. Hellendoorn – Carnegie Mellon University
  • Fatemeh Hendijani Fard – University of British Columbia
  • Gunel Jahangirova – Università della Svizzera italiana
  • Yanjie Jiang
  • Gail Kaiser – Columbia University
  • Foutse Khomh – Polytechnique Montréal
  • Moonzoo Kim – South Korea
  • Yunho Kim – Hanyang University
  • Jacob Krüger – Eindhoven University of Technology, Netherlands
  • Li Li – Beihang University
  • Yi Li – Nanyang Technological University
  • Yun Lin – Shanghai Jiao Tong University
  • Mario Linares-Vásquez – Universidad de los Andes
  • Hui Liu – Beijing Institute of Technology
  • David Lo – School of Computing and Information Systems, Singapore Management University
  • Yiling Lou – Fudan University
  • Qinghua Lu – CSIRO’s Data61
  • Linghui Luo – Amazon Web Services
  • Michael Lyu – The Chinese University of Hong Kong
  • Lei Ma – The University of Tokyo / University of Alberta
  • Fernanda Madeiral – Vrije Universiteit Amsterdam
  • Martina Maggio – Saarland University, Germany / Lund University, Sweden
  • Nash Mahmoud – Louisiana State University
  • Sam Malek
  • Andrian Marcus – George Mason University
  • Leonardo Mariani – University of Milano-Bicocca
  • Darko Marinov – University of Illinois at Urbana-Champaign
  • Zainab Masood – Prince Sultan University, Saudi Arabia
  • George Mathew
  • Shane McIntosh – University of Waterloo
  • Sergey Mechtaev – University College London
  • Dimitris Mitropoulos – University of Athens
  • André N. Meyer
  • Sarah Nadi – University of Alberta
  • Phuong T. Nguyen – University of L’Aquila
  • Ali Ouni – ETS Montreal, University of Quebec
  • Fabio Palomba – University of Salerno
  • Rangeet Pan – IBM Research
  • Mike Papadakis – University of Luxembourg, Luxembourg
  • Dietmar Pfahl – University of Tartu
  • Martin Pinzger – Universität Klagenfurt
  • Denys Poshyvanyk
  • Chris Poskitt – Singapore Management University
  • Michael Pradel – University of Stuttgart
  • Mukul Prasad
  • Akond Rahman – Auburn University
  • Ajitha Rajan – University of Edinburgh
  • Gema Rodríguez-Pérez – University of British Columbia (UBC)
  • Nico Rosner
  • Sukyoung Ryu
  • Anita Sarma – Oregon State University
  • Giuseppe Scanniello
  • Alexander Serebrenik
  • Francisco Servant – ITIS Software, University of Malaga
  • David C. Shepherd
  • Lin Shi – Beihang University, China
  • Jocelyn Simmonds – University of Chile
  • Ting Su – East China Normal University
  • Mary Sánchez-Gordón – Østfold University College
  • Shin Hwei Tan – Concordia University
  • Xin Tan – Beihang University
  • Yiming Tang – Rochester Institute of Technology
  • Yutian Tang – University of Glasgow
  • Valerio Terragni
  • Nikolaos Tsantalis
  • Jason Tsay
  • Christos Tsigkanos – University of Bern, Switzerland
  • Gias Uddin – University of Calgary, Canada
  • Bogdan Vasilescu

Melina Vidoni

Melina Vidoni

Australian national university.

Zhiyuan Wan

Zhiyuan Wan

Zhejiang university.

Junjie Wang

Junjie Wang

Institute of software, chinese academy of sciences.

Shaohua Wang

Shaohua Wang

Central university of finance and economics.

Shuai Wang

Hong Kong University of Science and Technology

Song Wang

York University

Lili Wei

McGill University

Shiyi Wei

University of Texas at Dallas

Mairieli Wessel

Mairieli Wessel

Radboud university.

Xin Xia

Huawei Technologies

Zhenchang Xing

Zhenchang Xing

Xiwei (Sherry) Xu

Xiwei (Sherry) Xu

Jason Minhui Xue

Jason Minhui Xue

Tao Yue

Andreas Zeller

Cispa helmholtz center for information security.

Chengyu Zhang

Chengyu Zhang

Lingming Zhang

Lingming Zhang

Mingxue Zhang

Mingxue Zhang

Chinese university of hong kong.

Qirun Zhang

Qirun Zhang

Georgia institute of technology.

Tao Zhang

Macau University of Science and Technology

Yuxia Zhang

Yuxia Zhang

Hao Zhong

Shurui Zhou

Andrea Zisman

Andrea Zisman

The open university.

Ying Zou



Reports & Papers 2023


The Oil and Gas Industry in Net Zero Transitions

Whose ‘moment of truth’?

  • Oil producers face ‘moment of truth’ over green investment, IEA warns
  • COP28 ‘moment of truth’ for oil industry, says energy boss
  • Carbon capture and storage hopes are pipe dreams, for now
  • Oil and gas industry needs to let go of carbon capture as solution to climate change, IEA says

The Lancet Countdown on health and climate change

Two grim reports on global climate efforts highlight increased fossil fuel subsidies, ill health

  • Global heat deaths could quadruple if action is not taken on climate change, study finds
  • Health risks linked to climate change are getting worse, experts warn
  • Climate change is putting the health of billions at risk
  • Health experts say world needs to end fossil fuel use as new report finds a rise in climate-related mortality
  • Climate change, fossil fuels hurting people’s health, says new global report

The Fifth National Climate Assessment

Fifth National Climate Assessment

  • Climate change is threatening American lives, White House report says
  • Every region of the country is taking climate action. Here’s how.
  • How does climate change threaten where you live? A region-by-region guide.
  • Climate impacts in the U.S. are ‘far-reaching and worsening,’ federal report finds
  • Five takeaways from a sweeping report on climate change in the US
  • No place in the US is safe from the climate crisis, but a new report shows where it’s most severe
  • ‘Every bit matters’: six key takeaways from the latest U.S. climate report
  • The toll of climate disasters is rising. But a U.S. report has good news, too.
  • Climate change threatens every facet of U.S. society, federal report warns
  • National Climate Assessment: flooding and sea level rise
  • The 5th National Climate Assessment in 15 maps

The Production Gap Report 2023

Nations That Vowed to Halt Warming Are Expanding Fossil Fuels, Report Finds

  • Planned fossil fuel production vastly exceeds the world’s climate goals, ‘throwing humanity’s future into question’

Unavoidable Future Increase in West Antarctic Ice-Shelf Melting Over the Twenty-First Century

  • Antarctica is melting and we all need to adapt, a trio of climate analyses show
  • Rapid melting in West Antarctica is ‘unavoidable,’ with potentially disastrous consequences for sea level rise, study finds
  • Rapid ice melt in West Antarctica now inevitable, research shows
  • Sea-level rise: West Antarctic ice shelf melt ‘unavoidable’
  • Antarctica ice crunch time: scientists sound alarm on sheet melting
  • Rapid Antarctic melting looks certain, even if emissions goals are met
  • West Antarctic ice sheet faces ‘unavoidable’ melting, a warning for sea level rise

Assessing the Size and Uncertainty of Remaining Carbon Budgets

  • Why many scientists are now saying climate change is an all-out ‘emergency’

The 2023 State of the Climate Report: Entering Uncharted Territory

World Energy Outlook 2023

Analysis: Global CO2 emissions could peak as soon as 2023, IEA data reveals

  • World shift to clean energy is unstoppable, IEA report says

Observed Increases in North Atlantic Tropical Cyclone Peak Intensification Rates

  • Atlantic hurricanes are getting more dangerous, more quickly
  • Atlantic hurricanes intensifying faster, more frequently, research finds
  • Atlantic hurricanes are getting stronger, faster, study finds

Electricity Grids and Secure Energy Transitions

Electric Grids Are a Hidden Weak Spot in World’s Climate Plans, Report Warns

  • The world’s power grids, 50 million miles’ worth, need a major overhaul
  • Global electricity grid must be upgraded urgently to hit climate goals, says IEA
  • Stalled spending on electrical grids slows rollout of renewable energy, endangering climate goals

Accelerating Decarbonization in the United States

One key step in the energy transition? No new gas lines.

  • Scientists lay out a sweeping roadmap for transitioning the US off fossil fuels
  • New report provides comprehensive plan to meet U.S. net-zero goals and ensure fair and equitable energy transition

Urgent action to cut methane emissions from fossil fuel operations essential to achieve global climate targets

‘Immediate’ cuts to methane from fossil fuel needed: IEA

  • Immediate methane cuts can prevent nearly a million premature deaths, IEA says

Strong El Niño Event Will Contribute to High Food Assistance Needs Through 2024

  • Rising El Niño intensity sparks global food security concerns

Children displaced in a changing climate

Extreme weather displaced 43m children in past six years, Unicef reports

  • Millions of children are displaced due to extreme weather events. Climate change will make it worse

Net Zero Roadmap: A Global Pathway to Keep the 1.5 °C Goal in Reach

New report has terrific news for the climate

  • IEA says ‘unprecedented’ clean energy surge has kept key warming target alive
  • The IEA’s latest report may have just saved the world
  • In praise of the IEA
  • 7 charts on the good, the bad, and the ugly of the energy transition

Prevalence and Predictors of Wind Energy Opposition in North America

  • It’s too easy to block a wind farm in America

The 9th National Risk Assessment: The Insurance Issue

How climate change is impacting home insurance premiums

  • US home insurance ‘bubble’ closer to popping as climate risks mount
  • New study warns of ‘climate insurance bubble.’ Is that driving costs up in Florida?
  • Homeowners face rising insurance rates as climate change makes wildfires, storms more common
  • Homes in parts of the U.S. are “essentially uninsurable” due to rising climate change risks
  • 39 million properties are significantly overvalued due to artificially suppressed home insurance costs

The Sustainability Trends Report 2023

  • 1 big thing: China says no fossil fuel “phaseout” at COP28

Assessing the U.S. Climate in August 2023

U.S. has seen a record number of weather disasters this year. It’s only September.

  • NOAA: 2023 worst year on record for billion-dollar disasters
  • U.S. sets record for billion-dollar weather disasters in a year — with 4 months still to go
  • 2023 worst year on record for billion-dollar climate disasters, NOAA says
  • Summer 2023 broke dozens of all-time monthly heat records
  • US sets new record for billion-dollar climate disasters in single year
  • 15 billion-dollar weather disasters hit the US this year, a record pace, NOAA says

Technical Dialogue of the First Global Stocktake: Synthesis Report by the Co-Facilitators on the Technical Dialogue

  • Climate report card says countries are trying, but urgently need improvement

Unlock the Endangered Species Act to Address GHG Emissions

  • Scientists were sure climate change was bad for polar bears. Now they know how bad.

The Global Drivers of Chronic Coastal Flood Hazards Under Sea-Level Rise

Rapid increase in the risk of heat-related mortality

  • Risk of heat-related deaths has ‘increased rapidly’ over past 20 years

Short-term excess mortality following tropical cyclones in the United States

U.S. hurricane deaths concentrated in vulnerable counties, research finds

  • New study finds far more hurricane-related deaths in US, especially among poor and vulnerable
  • Hurricanes have become deadlier in recent decades, study shows

Investing in American Energy

Biden touts Inflation Reduction Act on first anniversary

  • Celebrating the Investment Reverberation Act
  • Why John Podesta thinks the Inflation Reduction Act is the next Obamacare
  • First on CNN: some of America’s poorest communities are landing clean energy projects worth billions
  • The IRA turned one. What’s happened since and what’s next
  • Guest post: how the Inflation Reduction Act narrows the gap to US climate goals
  • The IRA turns 1. Many Democrats are already talking about the next climate law.
  • Biden’s Inflation Reduction Act spurs historic climate action
  • How to decarbonize your home with the Inflation Reduction Act
  • A prosperous year for the Inflation Reduction Act
  • Investing in America
  • The new climate law is upending the solar landscape
  • I turned my house into a zero-carbon utopia
  • Clean Economy Works | IRA one-year review
  • Biden, Yellen lead blitz to celebrate Inflation Reduction Act
  • Remarks by Secretary of the Treasury Janet L. Yellen on the economy ahead of Inflation Reduction Act anniversary in Las Vegas, Nevada
  • How the Inflation Reduction Act has reshaped the U.S.—and the world
  • Green investment boom and electric car sales: six key things about Biden’s climate bill
  • One year of our clean energy boom
  • Emissions and energy impacts of the Inflation Reduction Act

Extreme Heat in North America, Europe and China in July 2023 Made Much More Likely by Climate Change

  • Heat waves in U.S., Europe ‘virtually impossible’ without climate change, study finds
  • Some July heat: ‘virtually impossible’ without climate change, analysis finds
  • Report: record heat “virtually impossible” without climate change

Taking Stock 2023: US Emissions Projections after the Inflation Reduction Act

How Biden’s climate law will — and won’t — transform America

Deglaciation of Northwestern Greenland During Marine Isotope Stage 11

  • Ancient soil shows part of Greenland was ice-free — and could soon melt again, scientists say

A Multimillion-Year-Old Record of Greenland Vegetation and Glacial History Preserved in Sediment Beneath 1.4 km of Ice at Camp Century

Turning Climate Commitments Into Results

Chart: The US can’t meet its climate goals unless states step up

The 8th National Risk Assessment: The Precipitation Problem

  • The places in the U.S. most at risk for extreme rainfall

Drift of Earth’s Pole Confirms Groundwater Depletion as a Significant Contributor to Global Sea Level Rise 1993–2010

  • Something was messing with Earth’s axis. The answer has to do with us.
  • Humans pump so much groundwater that Earth’s axis has shifted, study finds
  • We’ve changed Earth’s spin by pumping groundwater
  • Humanity’s groundwater pumping has altered Earth’s tilt

Pace of Progress – Electrifying everything at the rate required to meet our climate goals

Wildfire Weather: Analyzing the 50-year shift across America


Air quality hits hazardous levels Wednesday from Canadian wildfire smoke

  • As smoke darkens the sky, the future becomes clear
  • We can see clearly now

Safe and Just Earth System Boundaries

  • Earth is ‘really quite sick now’ and in danger zone in nearly all ecological ways, study says
  • If climate goals are meant to protect us from ‘significant harm,’ then they aren’t good enough, scientists say
  • ‘Safe and just’ climate boundary has already been breached, says contested study
  • Earth’s health failing in seven out of eight key measures, say scientists
  • Humans have blown past key limits for Earth’s stability, scientists say

Climate and Readiness: Understanding Climate Vulnerability of U.S. Joint Force Readiness

Military must focus on short- and long-term challenges of climate change, report finds

  • U.S. military sees growing threat in thawing permafrost

Time to Pay the Piper: Fossil Fuel Companies’ Reparations for Climate Damages

  • Fossil fuel firms owe climate reparations of $209bn a year, says study

Persistent Effect of El Niño on Global Economic Growth

  • El Niño is getting stronger. That could cost the global economy trillions.
  • El Niños are far costlier than once thought, in the trillions, study says — and one’s brewing now
  • El Niño could cost the global economy $3 trillion

The Weight of New York City: Possible Contributions to Subsidence from Anthropogenic Sources

  • Sea level rise in New York City
  • New York’s skyscrapers are causing it to sink – what can be done about it?
  • New York City is sinking due to its million-plus buildings, study says
  • New York City is sinking. It’s far from alone

Emissions from Oil and Gas Operations in Net Zero Transitions

  • The (relatively) cheap opportunity to cut oil and gas emissions

Abyssal Ocean Overturning Slowdown and Warming Driven by Antarctic Meltwater

  • Truly ‘uncharted territory.’
  • Melting Antarctic ice predicted to cause rapid slowdown of deep ocean current by 2050

Global Economy’s “Speed Limit” Set to Fall to Three-Decade Low

  • Economists would like a word
  • World Bank warns of ‘lost decade’ for global economic potential

The Minderoo-Monaco Commission on Plastics and Human Health

  • Every stage of plastic production and use is harming human health: report

AR6 Synthesis Report Climate Change 2023

Corporate interests ‘watered down’ the latest IPCC climate report, investigations find

  • Carbon Brief’s definitive guide to the entire IPCC sixth assessment cycle
  • Why optimism can’t fix our climate politics
  • Q&A: IPCC wraps up its most in-depth assessment of climate change
  • Climate change is speeding toward catastrophe. The next decade is crucial, U.N. panel says
  • A clear message from science
  • Scientists deliver ‘final warning’ on climate crisis: act now or it’s too late
  • ‘It can be done. It must be done’: IPCC delivers definitive report on climate change, and where to now
  • IPCC report: climate solutions exist, but humanity has to break from the status quo and embrace innovation
  • World is on brink of catastrophic warming, U.N. climate change report says
  • Summary report, 13–19 March 2023
  • Climate change will impact everything everywhere all at once
  • Latest IPCC report demonstrates urgency & opportunity of reaching net zero
  • The latest IPCC report: what is it and why does it matter

World Energy Transitions Outlook 2023: 1.5°C Pathway

  • Energy agency chief warns transition to renewables is way off track, issues warning on stranded assets
  • Report: renewable energy growth falls short of climate goal
  • Global energy transition investments must quadruple to $5T to reach climate targets: IRENA
  • Investments in renewable energies must quadruple to meet climate target, IRENA says

Economic Report of the President

Climate change could spur severe economic losses, Biden administration says

The Impact of Climate Change on U.S. Subnational Economies

  • Long Island fourth nationally in potential risks due to climate change, Moody’s report says
  • Need to rethink retirement? These areas face the biggest climate-change risk.
  • Here are the U.S. cities most vulnerable to climate change, according to Moody’s
  • NYC, LI among metropolitan areas most likely to feel negative impacts of climate change, study says
  • Which U.S. cities will fare best in a warming world — and which will be hit hardest

Preliminary US Greenhouse Gas Emissions Estimates for 2022

  • U.S. emissions rose slightly in 2022. They need to be falling rapidly.

Global Glacier Change in the 21st Century: Every Increase in Temperature Matters

  • Half of Earth’s glaciers could melt even if key warming goal is met, study says

The Breakthrough Effect: How to Trigger a Cascade of Tipping Points to Accelerate the Net Zero Transition

  • The good news about climate tipping points

Population Attributable Fraction of Gas Stoves and Childhood Asthma in the United States

  • Why gas stoves actually matter
  • Gas stove talk gets weird
  • Are gas stoves really dangerous? What we know about the science
  • May the best stove win
  • Chef Alison Roman loves her induction stove. Twitter has so many questions
  • What the right’s gas stove freakout was really about
  • Push to phase out gas stoves over health concerns met with online anger
  • About that gas stove
  • U.S. regulators hinted at a possible ban on gas stoves. The debate boiled over
  • Induction cooktops and ranges are so good you may not miss your gas appliance
  • The gas stove regulation uproar, explained
  • Are gas stoves unsafe? Here’s what to know about the gas vs. induction debate
  • U.S. agency examines secret pollution from gas stoves
  • US safety agency to consider ban on gas stoves amid health fears
  • How Michelin 3-star chef Eric Ripert designed his own home kitchen


ICDE 2023

  • Important Dates
  • Organizing Committee
  • Research Program Committee
  • Industry and Applications Track Program Committee
  • Demonstration Track Program Committee
  • Diversity and Inclusion
  • Code of conduct
  • Sponsorship Opportunities
  • Research Papers Track
  • Special Track
  • Industry and Applications Track
  • Workshops Track
  • Tutorial Proposals Track
  • Demonstration Track
  • TKDE Poster Session Track
  • Ph.D. Symposium
  • Program Overview
  • Detailed Program
  • Research Track
  • Tutorial Track
  • Ph.D. Symposium Track
  • TKDE Poster Track
  • Diversity Program
  • ASTRIDE 2023
  • HardBD & Active 2023
  • Format and Registration
  • Student Travel Awards
  • Visa & Travel Info
  • Venue Information


IEEE ICDE 2023 CALL FOR RESEARCH PAPERS

Topics of interest

We invite the submission of original research contributions in the following areas.  

  • AI for Database Systems
  • Benchmarking, Performance Modeling, Tuning, and Testing
  • Cloud Data Management
  • Crowdsourcing
  • Data Mining and Knowledge Discovery
  • Data Models, Semantics, Query languages
  • Data Stream Systems and Edge Computing
  • Data Visualization and Interactive Data Exploration
  • Database Security and Privacy
  • Database technology for AI
  • Database technology for Blockchains
  • Distributed, Parallel and P2P Data Management
  • Explainability, Fairness, and Trust in Data Systems and Analysis
  • Graphs, Networks, and Semistructured Data
  • Information Integration and Data Quality 
  • IoT Data Management
  • Modern Hardware and In-Memory Database Systems
  • Query Processing, Indexing, and Optimization
  • Spatial Databases and Temporal Databases
  • Text, Semi-Structured Data, IR, Image, and Multimedia databases
  • Uncertain, Probabilistic, and Approximate Databases
  • Very Large Data Science Applications/pipelines 
  • Workflows, Scientific Data Management

We also welcome any original contributions that may cross the boundaries among areas or point in other novel directions of interest to the database research community.

IMPORTANT DATES

IEEE ICDE 2023 will have three rounds of research paper submissions, with each round involving two reviewing cycles to allow for revisions. Notification dates are approximate.

All deadlines are 11:59PM PST.

First Round:

  • Submission due: April 25, 2022 (Monday)
  • Notification to authors (Accept/Revise/Reject): June 25, 2022 (Saturday)
  • Revisions due: July 25, 2022 (Monday)
  • Final Notification to authors (Accept/Reject): August 25, 2022 (Thursday)
  • Camera-ready copy due: September 14, 2022 (Wednesday)

Second Round:

  • Submission due: July 8, 2022 (Friday)
  • Notification to authors (Accept/Revise/Reject): September 8, 2022 (Thursday)
  • Revisions due: October 8, 2022 (Saturday)
  • Notification to authors (Accept/Reject): November 8, 2022 (Tuesday)
  • Camera-ready copy due: December 12, 2022 (Monday)

Third Round:

  • Submission due: October 8, 2022 (Saturday)
  • Notification to authors (Accept/Revise/Reject): December 8, 2022 (Thursday)
  • Revisions due: January 8, 2023 (Sunday)
  • Notification to authors (Accept/Reject): February 8, 2023 (Wednesday)
  • Camera-ready copy due: February 28, 2023 (Tuesday)
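Since every deadline above is anchored to US Pacific time, authors in other regions may want to double-check their local equivalent. A minimal sketch using Python's standard zoneinfo module (the Berlin zone is just an example; note that late April falls in Pacific Daylight Time, so the IANA zone America/Los_Angeles, which tracks the daylight-saving switch, is safer than a fixed PST offset):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# First-round submission deadline, interpreted as US Pacific local time
deadline = datetime(2022, 4, 25, 23, 59, tzinfo=ZoneInfo("America/Los_Angeles"))

# Convert to a submitter's local time zone, e.g. Berlin
print(deadline.astimezone(ZoneInfo("Europe/Berlin")))  # → 2022-04-26 08:59:00+02:00
```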

The submission website is https://cmt3.research.microsoft.com/ICDE2023 . The site will be open for submissions at least two weeks before the deadline.

NOTES ON RESEARCH PAPERS

Manuscripts must be prepared in accordance with the IEEE format available at https://www.ieee.org/conferences_events/conferences/publishing/templates.html

Research papers must not exceed 12 pages for the main text plus up to 4 pages for the bibliography. No appendix is allowed. Only electronic submissions in PDF format will be considered. Papers that do not follow the guidelines or are not within the scope of topics relevant to ICDE will be desk rejected. 

A paper submitted to ICDE 2023 cannot be under review for any other conference or journal during the entire time it is considered for ICDE 2023, and it must be substantially different from any previously published work. Submissions will be reviewed in a single-blind manner. Papers rejected from ICDE 2023 during previous rounds cannot be resubmitted in future rounds of ICDE 2023. 

The best papers (as judged by the ICDE 2023 PC) will be selected for extended versions to be published in the IEEE Transactions on Knowledge and Data Engineering (TKDE). IEEE reserves the right to exclude a paper from distribution after the conference (e.g., removal from IEEE Xplore) if none of the authors attends the conference to present their paper.

REVIEWING PROCESS

Review Quality: ICDE 2023 papers will be stringently reviewed, with at least 3 reviews per paper. The review process will be coordinated by a meta-reviewer, resulting in a decision to accept, reject, or request a revision. A constructive meta-review, based on the reviewers' discussion of the paper, will be provided. Revised papers will go through an additional round of reviews before a final decision is made to accept or reject.

Revisions: Authors will be invited to submit a revised version of their paper if the PC believes it can reasonably be improved within the revision time frame. Authors will have one month to prepare their revisions. The revision process is intended to be a constructive partnership between reviewers and authors; to this end, reviewers will be instructed to accompany revision requests with specific, actionable items.

Number of reviews: All papers will receive at least three reviews. 

INCLUSION AND DIVERSITY IN WRITING

We value Diversity and Inclusion in our community and profession. Both are important in our writing as well. Be mindful in your writing of not using language or examples that further the marginalization, stereotyping, or erasure of any group of people, especially historically marginalized and/or under-represented groups (URGs) in computing. Also be vigilant and guard against unintentionally exclusionary examples. Reviewers will be empowered to monitor and demand changes if such issues arise. Going further, also consider actively raising the representation of URGs in your writing. Diversity of representation in writing is a simple but visible avenue to celebrate and ultimately help improve our community’s diversity.

CONFLICTS OF INTEREST

During submission of a research paper, the submission site will request information about Conflicts of Interest (COI) of the paper’s authors with program committee (PC) members. It is the full responsibility of all authors of a paper to identify all (and only)  PC members with potential COIs as per the definition provided on the submission site. Papers with incorrect or incomplete COI information as of the submission closing time are subject to immediate rejection.

Definition of Conflict of Interest (COI):

A paper author has a COI with a PC member when and only when one or more of the following conditions hold:

  • The PC member is a co-author of the paper.
  • The PC member has been a co-worker of the author in the same company or university within the past two years.
  • The PC member has been a collaborator of the author within the past two years.
  • The PC member is or was the author’s primary thesis advisor, no matter how long ago.
  • The author is or was the PC member’s primary thesis advisor, no matter how long ago.
  • The PC member is a relative or a close personal friend of the author.

Authors need to submit their complete set of COIs for the paper to be considered for review.
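The six conditions above form a simple disjunction, which can be captured in a few lines. The sketch below is purely illustrative: the Person record, its field names, and has_coi are hypothetical and not part of the CMT submission site.

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    """Hypothetical record of the facts the COI definition refers to."""
    name: str
    coauthors: set = field(default_factory=set)             # co-authors on this paper
    recent_employers: set = field(default_factory=set)      # companies/universities, past two years
    recent_collaborators: set = field(default_factory=set)  # collaborators, past two years
    thesis_advisors: set = field(default_factory=set)       # primary thesis advisors, any time
    close_contacts: set = field(default_factory=set)        # relatives and close personal friends

def has_coi(author: Person, pc_member: Person) -> bool:
    """True iff at least one of the six COI conditions holds."""
    return (
        pc_member.name in author.coauthors
        or bool(author.recent_employers & pc_member.recent_employers)
        or pc_member.name in author.recent_collaborators
        or pc_member.name in author.thesis_advisors   # PC member advised the author
        or author.name in pc_member.thesis_advisors   # author advised the PC member
        or pc_member.name in author.close_contacts
    )
```

Under this reading, a paper has a COI with a PC member whenever has_coi(author, pc_member) holds for any of its authors.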


  • Venue: Leibniz Universität Hannover
  • Registration
  • Visa Support
  • Instructions for Authors
  • Explore Hannover
  • How to get there
  • Accommodation
  • Promote RE'23
  • Complete Program
  • Your Program
  • Requirements Engineering 2023
  • Research Papers
  • RE@Next! Papers
  • Industrial Innovation Papers
  • Posters and Tool Demos
  • Doctoral Symposium
  • Journal-First
  • Student Volunteers
  • Requirements Engineering 2023 Committees
  • Organizing Committee
  • Track Committees
  • Contributors
  • People Index
  • Requirements Engineering 2024
  • Requirements Engineering 2022
  • Requirements Engineering 2021

Research Papers Requirements Engineering 2023

  • Accepted Papers
  • Submission Instructions
  • Formatting Instructions
  • Submission Q&A
  • Call for Papers

RE’23 welcomes original research papers on traditional areas of requirements engineering, as well as new ideas that challenge the boundaries of the area. The research paper track is the main track for long-form new research, including new technical solutions, scientific evaluations, and perspective papers.

Program Display Configuration

Wed 6 Sep, Thu 7 Sep, Fri 8 Sep (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)

RE’23 welcomes original papers on traditional RE topics; however, this year’s edition also invites submissions on the theme “Redefining RE: Challenging RE Perceptions, Boundaries, and Topics”. With this theme, the RE’23 organizers ask the RE community to challenge its notion of what is and is not part of requirements engineering, and to consider how these boundaries affect successful buy-in and application of RE research in industry. In particular, we encourage submissions that:

  • Challenge the boundaries of requirements engineering: given the changing landscape and paradigms of software and systems development, including the pervasiveness of agile methods and the rise of AI and machine learning, as well as cyber-physical and embedded systems, what sorts of problems and topics will be in the scope of RE in the years to come? What topics are out of scope? How do we maintain our identity as a community and continue to utilize our expertise while adapting to new needs and ways of working?
  • Propose new theories, topics, and methods, or a revision of existing RE topics: these may be needed to address the changing landscape of software and systems development. This may include new or revised theories, novel topics, new collaborations, increased transdisciplinarity, and new tactics or topics for education and for revising the industrial perception of requirements engineering.

Suggested Paper Topics

Although we are interested in discussing and revising the boundaries of RE, the RE conference typically welcomes papers which focus on the following topics. This list is not intended to be complete, but to give potential authors guidance on whether RE’23 is a good fit for their paper topic. Topics of interest include, but are not limited to:

  • Requirements Elicitation and Prioritization
  • Requirements Analysis and Specification
  • Requirements Verification and Validation
  • Requirements Management
  • Pragmatic Requirements Engineering
  • Theoretical Requirements Foundations
  • Large-scale Requirements Engineering
  • Agile Requirements Engineering
  • Requirements Engineering and Continuous Integration/Continuous Delivery/DevOps
  • Requirements, Configuration, Variability and Product Lines
  • Requirements and Adaptive Systems
  • Requirements Traceability and Dependencies
  • Requirements Engineering and Human-Computer Interaction (HCI)
  • Requirements in Open-Source Software Projects
  • Requirements and Issue Tracking
  • Requirements and Architecture
  • Quality Engineering
  • Product Management and Release Planning
  • Global Requirements Engineering
  • Online Feedback and User Review Analysis
  • Domain-specific Requirements Engineering
  • Requirements Engineering for Societal Challenges
  • Requirements Engineering for Artificial Intelligence
  • Requirements in System Engineering/System Science
  • Requirements Engineering Education and Training
  • Empirical Studies, Measurements and Prediction

Categories for Research Papers

The RE’23 Research Track invites original submissions of research papers in three categories: Technical Solution, Scientific Evaluation, and Perspective.

  • Technical Solution papers present solutions for requirements-related problems that are novel or significantly improve on existing solutions. This includes new algorithms or theory, novel tools, modeling languages, infrastructures, or other technologies. All requirements-related activities, such as elicitation, prioritization, or analysis, are in scope. These papers are mainly evaluated with regard to problem significance, novelty in comparison with existing work, clarity of presentation, technical soundness, and evidence for the solution’s benefits.
  • Scientific Evaluation papers evaluate existing problem situations or real-world artifacts, or they validate or refute proposed solutions by scientific means. This includes experiments, case studies, and surveys reporting qualitative and quantitative data and findings. The papers are mainly evaluated with regard to the soundness of research questions and appropriateness and correctness of study design, data analysis, and threats to validity. Replications are welcome. Lessons learned can be particularly important to complement other empirical results.
  • Perspective papers explore the history, successes, and challenges of requirements-related practices and research agendas, and outline research roadmaps for the future. Literature reviews are also included in this category and must distill novel knowledge, present new insights, and not be a mere compilation. These papers are evaluated based on the insights they offer to the reader and the corresponding arguments, and on their potential to shape future research.

Prospective authors who are unsure about relevance are welcome to contact the PC chairs for a preliminary assessment of the scope.

Review Criteria

Papers submitted to the RE’23 Research Track will be evaluated based on the following criteria:

  • Soundness: The extent to which the paper’s contributions and/or innovations address its research questions and are supported by rigorous application of appropriate research methods.
  • Significance: The extent to which the paper’s contributions can impact the field of requirements engineering, and under which assumptions (if any).
  • Novelty: The extent to which the paper's contributions are sufficiently original with respect to the state of the art. Note that Scientific Evaluation papers do not have to introduce a novel solution.
  • Related Work: The extent to which the paper's contributions are appropriately compared to related work.
  • Verifiability and Transparency: The extent to which the paper includes sufficient information to understand how an innovation works; to understand how data was obtained, analyzed, and interpreted; and how the paper supports independent verification or replication of the paper’s claimed contributions. Note that Perspective papers do not always require full verifiability and transparency.
  • Presentation: The extent to which the paper’s quality of writing meets the high standards of the RE conference series, including clear descriptions, adequate use of the English language, absence of major ambiguity, clearly readable figures and tables, and adherence to the provided formatting instructions. Note that the RE 2023 Research Track follows a double-blind review process.

Reviewers will carefully consider all of these criteria during the review process, and authors should take great care in clearly addressing them. The authors should clearly explain the claimed contributions, and how they are sound, significant, novel, and verifiable, as described above.

  • Call for Contributions as PDF
  • Call for Papers as PDF

Papers must be submitted electronically in PDF format via the RE’23 EasyChair system. Select the RE’23 Research Track for your submission.

In order to guide the reviewing process, all authors who intend to submit a paper must first submit the title and abstract. Abstracts should explicitly cover the context, objectives, methods, results, and conclusions of the work, and should not exceed 200 words.

Papers must not exceed 10 pages for the main body, plus up to 2 additional pages for the references. Submissions must be written in English and formatted according to the IEEE formatting instructions. Submissions must be double-blinded in conformance with the instructions below.

Please note: Papers that exceed the length specification, are not formatted correctly, or are not properly double-blinded will be desk-rejected without review. Only full paper submissions will be peer-reviewed. Abstract-only submissions will be discarded without further notice after the submission deadline. Accepted papers may require editing for clarity prior to publication and presentation. They will appear in the IEEE Digital Library.

Instructions for the Double-Blind Review Process

The RE’23 Research track will use a double-blind reviewing process. The goal of double-blind reviewing is to ensure that the reviewers can read and review your paper without having to know who any of the authors are, and hence avoid related bias. Of course, authors are allowed and encouraged to submit papers that build on their previously published work.

In order to prepare your submission for double-blind reviewing, please follow the instructions given below.

  • Omit all names and affiliations of authors from the title page, but keep sufficient space to re-introduce them in the final version should the paper be accepted.
  • Do not include any acknowledgements that might disclose your identity. Leave space in your submission to add such acknowledgements when the paper has been accepted.
  • Refer to your own work in the third person, as you would normally do with the work of others. You should not change the names of your own tools, approaches, or systems, since this would clearly compromise the review process; it would also violate the constraint that “no change is made to any technical details of the work”. Instead, refer to the authorship or provenance of tools, approaches, or systems in the third person, so that it is credible that another author could have written your paper. In particular, never blind references.
  • When providing supplementary material (e.g., tools, data repositories, source code, study protocols), do this via a website that does not disclose your identity. Please refer to the Open Science Policy in the Call for Papers with guidelines on how to anonymize such content.
  • Adhere to the third instruction above when citing your own previously published work.
  • Remove identification metadata from the PDF file before submission (in Adobe Acrobat Reader, you can check for its presence via File > Properties, or Ctrl-D).
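For LaTeX users, the last two instructions can be sketched as follows (an illustrative fragment, not an official RE’23 template; the `hyperref` keys shown assume pdfLaTeX):

```latex
\documentclass[conference]{IEEEtran}
\usepackage{hyperref}

% Blank out identifying PDF metadata; verify the result via
% File > Properties (Ctrl-D) in Adobe Acrobat Reader.
\hypersetup{
  pdfauthor={},
  pdfcreator={},
  pdfproducer={}
}

\title{Your Paper Title}
% Placeholder author block for the double-blind submission; keep the space
% so names and affiliations can be re-introduced in the camera-ready version.
\author{\IEEEauthorblockN{Anonymous Author(s)}
\IEEEauthorblockA{Affiliation withheld for review}}

\begin{document}
\maketitle
% ... paper body ...
\end{document}
```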

Important Policy Announcements

Papers submitted to RE’23 must be original. They will be reviewed under the assumption that they do not contain plagiarized material and have not been published nor submitted for review elsewhere while under consideration for RE’23.

RE’23 follows the IEEE policies for cases of double submission and plagiarism.

The format of your paper must strictly adhere to the IEEEtran Proceedings Format.

LaTeX users: please use the LaTeX class file IEEEtran v1.8 and the following configuration (without option ‘compsoc’ or ‘compsocconf’):

\documentclass[conference]{IEEEtran}

Word users: please use the IEEE Word template (see the official IEEE Templates page for more information).

Please make sure that your submission

  • does not exceed the respective page limit specified in the track call,
  • is in PDF format,
  • is in letter page size,
  • does not have page numbers,
  • has all fonts embedded in the PDF file,
  • uses only scalable font types (like Type 1 or TrueType); bit-mapped font types (like Type 3) are not acceptable,
  • has all figures embedded as vector graphics (if not possible, use a high-resolution bitmap format of at least 300 dpi; do not use JPG, but a lossless format like PNG or GIF),
  • has all text in figures and tables large enough to remain readable when printed,
  • has a caption for every figure and table,
  • has the title and all headings properly capitalized,
  • has no orphans and widows, and
  • does not use footnote references in the abstract.
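For LaTeX users, a preamble along these lines helps satisfy several of the items above (a sketch under common pdfLaTeX defaults; always verify the generated PDF, e.g., that all fonts are actually embedded, before submitting):

```latex
\documentclass[conference]{IEEEtran} % US letter page size; no page numbers by default
\usepackage{lmodern}                 % scalable Type 1 fonts instead of bit-mapped Type 3
\usepackage[T1]{fontenc}             % font encoding so glyphs embed as proper outlines
\usepackage{graphicx}                % figures; prefer vector PDF over bitmap formats
```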

Empirical Studies and Sharing of Data

I am doing research with industry. What if I cannot share data from my research? We absolutely welcome research with industry, as it often conveys important lessons about requirements engineering in practice – and we perfectly understand that industry data may be subject to confidentiality issues or legal requirements. If you cannot share data, please state the reason in the submission form and the paper; a typical wording would be “The raw data obtained in this study cannot be shared because of confidentiality agreements”. Having said that, even sharing a subset of your data (for instance, the data used for figures and tables in the paper, an anonymized subset, or one that aggregates over the entire dataset), analysis procedures, or scripts, would be useful.

I am doing user studies. What if I cannot share data from my empirical study? We absolutely welcome user studies! However, we also perfectly understand that sharing raw data can be subject to constraints such as privacy issues. If you cannot share data, please state the reason in the submission form and the paper; a typical wording would be “The raw data obtained in this study cannot be shared because of privacy issues”. Having said that, even sharing a subset of your data (for instance, the data used for figures and tables in the paper, an anonymized subset, or one that aggregates over the entire dataset), analysis procedures, or scripts, would be useful.

I am doing qualitative research. What information should I include to help reviewers assess my research results and help readers use my results? Best practices for addressing the reliability and credibility of qualitative research suggest providing detailed arguments and rationale for qualitative approaches, procedures, and analyses. Therefore, authors are advised to provide as much transparency as possible into these details of their study. For example, clearly explain details and decisions such as 1) the context of the study, 2) the participant-selection process and the theoretical basis for selecting those participants, 3) the collection of data or evidence from participants, and 4) the data analysis methods, e.g., justify their choice theoretically, explain how they relate to the original research questions, and make explicit how the themes and concepts were identified from the data. Further, provide sufficient detail to bridge the gap between the interpretation of findings presented and the collected evidence by, for example, numbering quotations and labeling sources. Similar to replicability in quantitative research, transparency aims to ensure a study’s methods are available for inspection and interpretation. However, replicability or repeatability is not the goal, as qualitative methods are inherently interpretive and emphasize context. As a consequence, reporting qualitative research might require more space in the paper; authors should consider providing enough evidence for their claims while being mindful of the use of space.

Finally, when qualitative data is counted and used for quantitative methods, authors should report the techniques and results used to assess rigour in the data analysis procedures, such as inter-rater reliability tests or triangulation over different data sources or methods, and should justify how they achieved rigour if no such methods were used.

I can make my data set / my tool available, but it may reveal my identity. What should I do? See this question under “double-anonymous submissions”, below.

Double-Blind Submissions

I previously published an earlier version of this work in a venue that doesn’t have double-anonymous. What should I do about acknowledging that previous work? If the work you are submitting for review has previously been published in a peer-reviewed venue or in a non-peer-reviewed venue (e.g., arXiv.org, or a departmental technical report), then it should be cited but in the third person so that it is not revealed that the cited work and the submitted paper share one or more authors.

Our submission makes use of work from a PhD or master’s thesis, dissertation, or report which has been published. Citing the dissertation might compromise anonymity. What should we do? It is perfectly OK to publish work arising from a PhD or master’s degree, and there is no need to cite it in a submission to the RE Research Track because prior dissertation publication does not compromise novelty. In the final post-review, camera-ready version of the paper, please do cite the dissertation to acknowledge its contribution, but in any submission to the RE Research Track, please refrain from citing the dissertation to increase anonymity.

You need not worry whether or not the dissertation has appeared. Your job is to ensure that your submission is readable and reviewable, without the reviewers needing to know the identities of the submission’s authors. You do not need to make it impossible for the reviewers to discover the authors’ identities. The referees will be trying hard not to discover the authors’ identity, so they will likely not be searching the web to check whether there is a dissertation related to this work.

What if we want to cite some unpublished work of our own (as motivation for example)? If the unpublished paper is an earlier version of the paper you want to submit to the RE Research Track and is currently under review, then you have to wait until your earlier version is through its review process before you can build on it with further submissions (this would be considered double-submission and violates plagiarism policies and procedures). Otherwise, if the unpublished work is not an earlier version of the proposed submission, then you should simply make it available on a website, for example, and cite it in the third person to preserve anonymity, as you are doing with other work.

Can I disseminate a non-anonymized version of my submitted work by discussing it with colleagues, giving talks, publishing it on arXiv, etc.? You can discuss and present your work that is under submission at small meetings (e.g., job talks, visits to research labs, a Dagstuhl or Shonan meeting), but you should avoid broadly advertising it in a way that reaches the reviewers even if they are not searching for it. Therefore, the title of your submission must be different from preprints on arXiv or similar sites. During review, you must not publicly use the submission title. Under these conditions, you are allowed to put your submission on your home page and present your work at small professional meetings.

What if we want to make available a tool, a data set, or some other resource, but it may reveal our identity? Please refer to the Open Science Policy of ICSE’23 for guidelines on how to anonymize such content. If that is impossible, place a warning next to the link that following it may reveal your identity.

Program Committee

  • Jennifer Horkoff (PC Chair), Chalmers and the University of Gothenburg
  • Fabiano Dalpiaz (PC Chair), Utrecht University, Netherlands
  • Raian Ali
  • Carina Alves, Universidade Federal de Pernambuco
  • Chetan Arora, Monash University
  • Fatma Başak Aydemir, Boğaziçi University
  • Muneera Bano, CSIRO's Data61
  • Nelly Bencomo, Durham University, United Kingdom
  • Dan Berry, University of Waterloo
  • Markus Borg
  • Travis Breaux, Carnegie Mellon University, United States
  • Jaelson Castro
  • Jane Cleland-Huang, University of Notre Dame
  • Maya Daneva, University of Twente
  • Joerg Doerr, Fraunhofer IESE
  • Michael Felderer, German Aerospace Center (DLR) & University of Cologne
  • Alessio Ferrari
  • Xavier Franch, Universitat Politècnica de Catalunya
  • Sepideh Ghanavati, University of Maine
  • Martin Glinz, University of Zurich, Switzerland
  • Miguel Goulao, NOVA LINCS, FCT/UNL
  • Jin L.C. Guo, McGill University
  • Irit Hadar, University of Haifa
  • Patrick Heymans, University of Namur
  • Emilio Insfran, Universitat Politècnica de València, Spain
  • Haruhiko Kaiya, Kanagawa University
  • Fitsum Kifetew, Fondazione Bruno Kessler
  • Eric Knauss, Chalmers | University of Gothenburg
  • Seok-Won Lee, Ajou University, South Korea
  • Julio Cesar Leite, Federal University of Bahia (UFBA)
  • Emmanuel Letier, University College London
  • Beijing University of Technology
  • Sabrina Marczak
  • Daniel Mendez, Blekinge Institute of Technology and fortiss
  • Denisse Muñante, ENSIIE & SAMOVAR
  • John Mylopoulos, University of Ottawa
  • Barbara Paech, Heidelberg University
  • Anna Perini
  • Dirk Riehle, University of Erlangen
  • Mehrdad Sabetzadeh
  • Norbert Seyff, University of Applied Sciences and Arts Northwestern Switzerland FHNW
  • Paola Spoletini, Kennesaw State University
  • Colin C. Venter, University of Huddersfield
  • Michael Vierhauser, University of Innsbruck
  • Andreas Vogelsang, University of Cologne
  • Stefan Wagner, Technical University of Munich
  • Yves Wautelet
  • Rebekka Wohlrab, Chalmers University of Technology
  • Tao Yue, Beihang University
  • Jelena Zdravkovic, Stockholm University
  • Liping Zhao, The University of Manchester
  • Didar Zowghi, CSIRO's Data61, University of Technology Sydney, University of New South Wales


Top scientific breakthroughs and emerging trends for 2023

CAS Science Team

January 31, 2023


*Updated in January 2024*: 

While the article below was created at the end of 2022, it still offers critical insight into emerging trends for the future of scientific R&D. For the latest trends and breakthroughs, the scientists and experts at CAS recently published a new review on the landscape of scientific trends to watch in 2024: from AI, to emerging materials, our ongoing battle against undruggable proteins, sustainability trends, and more. Additionally, CAS teamed up with experts from Lawrence Livermore National Lab, Oak Ridge National Lab, and The Ohio State University to reveal the top trends to watch in the year ahead. If you weren't able to join us for the webinar, see the recording here for the experts' take on the year ahead.

As published in 2022:

The pace of innovation never slows, and the impact of these scientific breakthroughs will redefine the way we live, work, and connect with the world around us. From space exploration at the largest scale to diagnostics at the single-cell level, these breakthroughs will inspire innovators to push the boundaries of what is possible. 


A new era of space exploration


Need to be reminded of how incredibly vast our universe is? The first-ever photos from the James Webb Space Telescope are awe-inspiring. While this is the most technically advanced and powerful telescope ever created, what it teaches us about our universe will lead to future missions and exploration for generations ahead. Recently, the newest mission to the Moon was launched as part of NASA’s Artemis Program, which will pave the way for a future mission to Mars. This new era of space exploration will drive technological advancements in fields beyond astronautics and stimulate progress in real-world applications like materials, food science, agriculture, and even cosmetics.

A milestone in AI predictions


For decades, the scientific community has chased a greater understanding of the relationships between protein functions and 3D structures. In July 2022, DeepMind revealed that the folded 3D structure of a protein molecule can be predicted from its linear amino-acid sequence using the AlphaFold2, RoseTTAFold, and trRosettaX-Single algorithms. The algorithms’ predictions reduced the number of human proteins with unknown structural data from 4,800 to just 29. While there will always be challenges with AI, the ability to predict protein structures has implications across all life sciences. Key challenges for the future include modeling proteins with intrinsically disordered properties and those that change structure through post-translational modifications or in response to environmental conditions. Beyond protein modeling, AI advancements continue to reshape workflows and expand discovery capabilities across many industries and disciplines.

Developing trends in synthetic biology


Synthetic biology has the potential to redefine synthetic pathways by using engineered biological systems (i.e., microorganisms for which a large part of the genome, or the entire genome, has been designed or engineered) to manufacture a range of biomolecules and materials, such as therapeutics, flavors, fabrics, food, and fuels. For example, insulin could be produced without pig pancreases, leather without cows, and spider silk without spiders. The potential in life sciences alone is immense, but when applied to manufacturing industries, synthetic biology could minimize future supply chain challenges, increase efficiency, and create new opportunities for biopolymers or alternative materials with more sustainable approaches. Today, teams use AI-based metabolic modeling, CRISPR tools, and synthetic genetic circuits to control metabolism, manipulate gene expression, and build pathways for bioproduction. As this discipline begins to cross over into multiple industries, the latest developments and emerging trends for metabolic control and engineering challenges are showcased in a 2022 Journal of Biotechnology article.

Single-cell metabolomics set to soar


While much progress has been made in genetic sequencing and mapping, genomics only tells us what a cell is capable of. To gain a better understanding of cellular functions, proteomic and metabolomic approaches offer different angles for revealing molecular profiles and cellular pathways. Single-cell metabolomics gives a snapshot of the cellular metabolism within a biological system. The challenge is that metabolomes change rapidly, so sample preparation is critical to understanding cell function. Collectively, a series of recent advancements in single-cell metabolomics (from open-sourced techniques, advanced AI algorithms, and sample preparations to new forms of mass spectrometry) demonstrates the ability to run detailed mass spectral analyses. This allows researchers to determine the metabolite population on a cell-by-cell basis, which would unlock enormous potential for diagnostics. In the future, this could lead to the ability to detect even a single cancerous cell in an organism. Combined with new biomarker detection methods, wearable medical devices, and AI-assisted data analysis, this array of technologies will improve diagnosis and lives.

New catalysts enable greener fertilizer production


Every year, billions of people depend on fertilizers for the ongoing production of food, and reducing the carbon footprint and expenses in fertilizer production would reshape the impact agriculture has on emissions. The Haber-Bosch process for fertilizer production converts nitrogen and hydrogen to ammonia. To reduce energy requirements, researchers from Tokyo Tech have developed a noble-metal-free nitride catalyst containing a catalytically active transition metal (Ni) on a lanthanum nitride support that is stable in the presence of moisture. Since the catalyst doesn't contain ruthenium, it presents an inexpensive option for reducing the carbon footprint of ammonia production. The La-Al-N support, along with the active metals, such as nickel and cobalt (Ni, Co), produced NH3 at rates similar to conventional metal nitride catalysts. Learn more about sustainable fertilizer production in our latest article.
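For reference, the Haber-Bosch chemistry mentioned above is the familiar ammonia synthesis equilibrium:

```latex
\mathrm{N_2 + 3\,H_2 \;\rightleftharpoons\; 2\,NH_3}
```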

Advancements in RNA medicine


While the application of mRNA in COVID-19 vaccines garnered lots of attention, the real revolution of RNA technology is just beginning. Recently, a new multivalent nucleoside-modified mRNA flu vaccine was developed. This vaccine has the potential to build immune protection against any of the 20 known subtypes of influenza virus and protect against future outbreaks. Many rare genetic diseases are the next target for mRNA therapies, as they often involve a missing vital protein that could be restored by delivering a healthy version through mRNA therapy. In addition to mRNA therapies, the clinical pipeline has many RNA therapeutic candidates for multiple forms of cancer and for blood and lung diseases. RNA is highly targeted, versatile, and easily customized, which makes it applicable to a wide range of diseases. Learn more about the crowded clinical pipeline and the emerging trends in RNA technologies in our latest CAS Insight Report.

Rapid skeletal transformation


Within synthetic chemistry, the challenge of safely exchanging a single atom in a molecular framework, or inserting and deleting single atoms from a molecular skeleton, has been formidable. While many methods have been developed to functionalize molecules with peripheral substituents (such as C–H activation), one of the first methods to perform single-atom modifications on the skeletons of organic compounds was developed by Mark Levin’s group at the University of Chicago. Their method enables selective cleaving of the N–N bond of pyrazole and indazole cores to afford pyrimidines and quinazolines. Further development of skeletal editing methods would enable rapid diversification of commercially available molecules, which could lead to much faster discoveries of functional molecules and ideal drug candidates.

Advancing limb regeneration


Limb loss is projected to affect over 3.6 million individuals per year by 2050. For a long time, scientists believed that the single biggest key to limb regeneration was the presence of nerves. However, work done by Dr. Muneoka and his team demonstrated the importance of mechanical load to digit regeneration in mammals, and that the absence of a nerve does not inhibit regeneration. Limb regeneration was also advanced by researchers at Tufts University, who used acute multidrug delivery, via a wearable bioreactor, to successfully enable long-term limb regeneration in frogs. This early success could potentially lead to larger, more complex tissue re-engineering advances for humans, eventually benefiting military veterans, diabetics, and others impacted by amputation and trauma.

Nuclear fusion generates more net energy with ignition


Nuclear fusion is the process that powers the sun and stars. For decades, researchers have pursued the idea of replicating nuclear fusion on Earth as a source of energy that, in theory, could fulfill all the planet's future energy needs. The goal is to force light atoms to collide so forcefully that they fuse and release more energy than they consume. However, overcoming the electrical repulsion between the positive nuclei requires high temperatures and pressures. Once that barrier is overcome, fusion releases large amounts of energy, which should also drive the fusion of nearby nuclei. Previous attempts to initiate fusion used strong magnetic fields and powerful lasers but had been unable to generate more energy than they consumed.

Researchers at Lawrence Livermore National Laboratory’s ignition facility reported that the team was able to initiate nuclear fusion, producing 3.15 megajoules of energy from the 2.05 megajoules delivered by the laser. While this is a monumental breakthrough, the reality of a functioning nuclear fusion plant powering our grid may still be decades in the making. There are significant implementation hurdles (scalability, plant safety, energy required to generate the laser, wasted by-products, etc.) that must be addressed before this comes to fruition. However, igniting nuclear fusion is a major milestone that paves the way for future progress to build upon this achievement.
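Using the figures above, the reported target gain works out to roughly 1.5 (note this counts only the laser energy delivered to the target, not the far larger energy drawn from the grid to power the laser):

```latex
Q = \frac{E_{\text{out}}}{E_{\text{laser}}} = \frac{3.15\ \text{MJ}}{2.05\ \text{MJ}} \approx 1.54
```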


  • Last updated February 28, 2024

Top 10 Research Papers Published by Google in 2023

  • Published on December 7, 2023
  • by Mohit Pandey


The year 2023 has witnessed some groundbreaking research shaping the future of AI technology. Google, which has been at the forefront of the AI revolution, has announced AI models with multiple capabilities. Along with the launch of innovative products, it has also released various research papers, offering a glimpse into the underlying technology.  

Most recently, Google released its latest generative AI multimodal model, Gemini, which competes directly with GPT-4 and is already being widely discussed on social media. But it is far from the only notable paper Google published this year.

Here is the list of top 10 research papers published by Google in 2023.

Gemini: A Family of Highly Capable Multimodal Models

Topping the list is obviously Gemini, the paper behind the multimodal model that competes with OpenAI’s GPT-4. Recently introduced, Gemini is a highly capable system jointly trained on image, audio, video, and text data. The primary goal is to create a model with robust generalist capabilities across modalities, coupled with state-of-the-art understanding and reasoning performance within each domain.

Gemini 1.0, the inaugural version, is available in three sizes: Ultra for intricate tasks, Pro for scalable performance and deployment, and Nano for on-device applications. Each size is meticulously designed to cater to distinct computational limitations and application needs. Comprehensive evaluations of Gemini models encompass a diverse array of internal and external benchmarks, spanning language, coding, reasoning, and multimodal tasks. 

PaLM 2 Technical Report

PaLM 2 is a language model that surpasses its predecessor, PaLM, boasting enhanced multilingual and reasoning capabilities while being more computationally efficient. Leveraging a Transformer-based architecture and a diverse set of training objectives, PaLM 2 demonstrates significantly improved performance on various downstream tasks, ensuring superior quality across different model sizes.

Notably, PaLM 2 exhibits accelerated and resource-efficient inference, facilitating broader deployment and faster response times for more natural interactions. Its robust reasoning capabilities are highlighted by substantial advancements over PaLM in tasks such as BIG-Bench. 

PaLM-E: An Embodied Multimodal Language Model

PaLM-E represents a significant leap forward in the development of AI agents capable of interacting with the physical world. The paper describes an LLM equipped with an embodiment, allowing it to perceive and manipulate its surroundings through sensors and actuators.

PaLM-E’s capabilities extend beyond simply understanding and generating text. It can navigate through a simulated environment, manipulate objects, and engage in simple conversations. This embodiment allows PaLM-E to learn and adapt to its environment in a more nuanced and realistic way compared to traditional LLMs.

The potential applications of PaLM-E are vast and diverse. It could be used to develop more realistic and engaging virtual assistants, robots that can assist with tasks in the real world, and even educational tools that allow users to learn through interactive simulations.

MusicLM: Generating Music from Text

Google was also into making music this year. MusicLM revolutionises music creation by enabling the generation of high-quality music from simple text descriptions. This paper introduces a system capable of composing music in various styles and genres based on user input, opening up new possibilities for musicians, composers, and anyone interested in exploring musical creativity.

MusicLM’s capabilities are based on a neural network trained on a massive dataset of music and text pairs. This allows the system to learn the complex relationships between text and musical elements, enabling it to generate music that is both faithful to the user’s description and musically sound.

Structure and Content-Guided Video Synthesis with Diffusion Models

This paper introduces a novel method for synthesising realistic videos using diffusion models. This approach allows for greater control over the content and structure of the generated videos, making it a valuable tool for video editing and animation.

Traditional video synthesis methods often lacked the ability to accurately control the details and structure of the generated videos. Diffusion models address this limitation by providing a framework for gradually introducing noise into a video and then denoising it to achieve the desired result. This allows for fine-grained control over the entire video generation process.
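The noise-then-denoise framework the paragraph describes can be sketched in a few lines. This is a generic DDPM-style forward process, not the paper's actual video model, and the noise schedule below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# An illustrative linear noise schedule over T steps (a common DDPM choice).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def add_noise(x0, t, eps):
    """Forward diffusion: blend the clean sample with Gaussian noise at step t."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def recover_x0(xt, t, eps):
    """If the injected noise is known (the quantity the network learns to
    predict), the clean sample can be recovered exactly."""
    return (xt - np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])

x0 = rng.standard_normal(8)   # stand-in for a video frame or latent
eps = rng.standard_normal(8)
xt = add_noise(x0, 500, eps)  # a heavily noised version of x0
```

Because generation runs this process in reverse, conditioning signals (such as the structure and content guidance in the paper) can steer every denoising step, which is what gives diffusion models their fine-grained control.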

Lion: EvoLved Sign Momentum for Training Neural Networks

Lion introduces a new and efficient optimisation algorithm for training neural networks. This algorithm significantly improves the speed and accuracy of training, leading to better performance for various AI applications.

Traditional optimisation algorithms used in training neural networks can be slow and memory-hungry. Lion, which was discovered through program search, addresses this by tracking only a single momentum buffer and applying the sign of an interpolation between momentum and gradient. This allows Lion to optimise the learning process more efficiently, leading to faster convergence and improved generalisation.
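As a concrete (unofficial) sketch, the published Lion update rule fits in a few lines of NumPy; the hyperparameter values below are illustrative defaults, not tuned settings:

```python
import numpy as np

def lion_step(param, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update: take the sign of an interpolation between the
    momentum and the current gradient, then refresh the momentum."""
    update = np.sign(beta1 * m + (1.0 - beta1) * grad)
    new_param = param - lr * (update + wd * param)
    new_m = beta2 * m + (1.0 - beta2) * grad
    return new_param, new_m

p, m = np.zeros(3), np.zeros(3)
g = np.array([1.0, -2.0, 0.5])
p, m = lion_step(p, g, m)  # every coordinate moves by exactly lr, in the sign direction
```

Because only the sign of the update is applied, Lion stores a single momentum buffer versus Adam's two, which is where the memory savings come from.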

InstructPix2Pix: Learning to Follow Image Editing Instructions

This paper proposes a groundbreaking method for editing images based on text instructions. InstructPix2Pix enables users to modify images in a natural and intuitive way, opening up new possibilities for image editing and manipulation.

Traditional image editing tools require users to have specific technical skills and knowledge. InstructPix2Pix removes this barrier by allowing users to edit images simply by providing textual instructions. This user-friendly approach makes image editing accessible to a wider audience and simplifies the process for experienced users.

DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation

Large text-to-image models have limitations in mimicking subjects from a reference set and generating diverse renditions. To address this, Google Research and Boston University present a personalised approach. By fine-tuning the model with a few subject images, it learns to associate a unique identifier with the subject, enabling the synthesis of photorealistic images in different contexts.

The technique preserves key features while exploring tasks like recontextualization, view synthesis, and artistic rendering. A new dataset and evaluation protocol are provided for subject-driven generation.

REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory

The paper presents REVEAL, an end-to-end Retrieval-Augmented Visual Language Model. REVEAL encodes world knowledge into a large-scale memory and retrieves from it to answer knowledge-intensive queries. It consists of a memory, encoder, retriever, and generator. The memory encodes various multimodal knowledge sources, and the retriever finds relevant entries. 

The generator combines retrieved knowledge with input queries to generate outputs. REVEAL achieves state-of-the-art performance in visual question answering and image captioning, utilising diverse multimodal knowledge sources. The paper comes from researchers at the University of California, Los Angeles and Google Research.
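The retrieval step can be illustrated with a generic dense-retrieval sketch. This is not REVEAL's code; the memory here is a mock matrix of unit-norm embeddings standing in for the encoded knowledge sources:

```python
import numpy as np

def retrieve(query_emb, memory_embs, k=3):
    """Score every memory entry by inner product and return the top-k indices."""
    scores = memory_embs @ query_emb
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(1)
memory = rng.standard_normal((100, 16))
memory /= np.linalg.norm(memory, axis=1, keepdims=True)

# A query close to entry 42 should retrieve that entry first.
query = memory[42] + 0.01 * rng.standard_normal(16)
top_k = retrieve(query, memory)
```

In REVEAL itself the retriever is trained end-to-end with the encoder and generator, so the embeddings are learned rather than fixed; the scoring-and-top-k mechanic is the same.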

On Distillation of Guided Diffusion Models

Classifier-free guided diffusion models, widely used in image generation, suffer from computational inefficiency. Google, Stability AI and LMU Munich propose distilling these models into faster sampling models. The distilled model matches the output of combined conditional and unconditional models, achieving comparable image quality with fewer sampling steps. 

The approach is up to 256 times faster for pixel-space models and at least 10 times faster for latent-space models. It also proves effective in text-guided image editing and inpainting, requiring only 2-4 denoising steps for high-quality results.
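The "combined conditional and unconditional" output that the student is distilled to match is the standard classifier-free guidance combination; a minimal sketch, with an illustrative guidance weight `w`:

```python
import numpy as np

def guided_prediction(eps_uncond, eps_cond, w):
    """Classifier-free guidance: extrapolate from the unconditional noise
    prediction toward the conditional one. w = 1 recovers the conditional
    model; w > 1 strengthens the conditioning."""
    return eps_uncond + w * (eps_cond - eps_uncond)
```

The distilled student regresses onto this combined target directly, so each sampling step needs one network evaluation instead of two, on top of the drastic reduction in the number of steps.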

10 Research Papers Accepted to CVPR 2023


Research from Columbia University's Computer Science Department has been accepted to the 2023 Computer Vision and Pattern Recognition (CVPR) Conference. The annual event explores machine learning, artificial intelligence, and computer vision research and its applications.

CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation
Samir Yitzhak Gadre (Columbia University), Mitchell Wortsman (University of Washington), Gabriel Ilharco (University of Washington), Ludwig Schmidt (University of Washington), Shuran Song (Columbia University)

For robots to be generally useful, they must be able to find arbitrary objects described by people (i.e., be language-driven) even without expensive navigation training on in-domain data (i.e., perform zero-shot inference). We explore these capabilities in a unified setting: language-driven zero-shot object navigation (L-ZSON). Inspired by the recent success of open-vocabulary models for image classification, we investigate a straightforward framework, CLIP on Wheels (CoW), to adapt open-vocabulary models to this task without fine-tuning. To better evaluate L-ZSON, we introduce the Pasture benchmark, which considers finding uncommon objects, objects described by spatial and appearance attributes, and hidden objects described relative to visible objects. We conduct an in-depth empirical study by directly deploying 21 CoW baselines across Habitat, RoboTHOR, and Pasture. In total, we evaluate over 90k navigation episodes and find that (1) CoW baselines often struggle to leverage language descriptions, but are proficient at finding uncommon objects. (2) A simple CoW, with CLIP-based object localization and classical exploration — and no additional training — matches the navigation efficiency of a state-of-the-art ZSON method trained for 500M steps on Habitat MP3D data. This same CoW provides a 15.6 percentage point improvement in success over a state-of-the-art RoboTHOR ZSON model.

Towards Fast Adaptation of Pretrained Contrastive Models for Multi-Channel Video-Language Retrieval
Xudong Lin (Columbia University), Simran Tiwari (Columbia University), Shiyuan Huang (Columbia University), Manling Li (UIUC), Mike Zheng Shou (National University of Singapore), Heng Ji (UIUC), Shih-Fu Chang (Columbia University)

Multi-channel video-language retrieval requires models to understand information from different channels (e.g., video+question, video+speech) to correctly link a video with a textual response or query. Fortunately, contrastive multimodal models are shown to be highly effective at aligning entities in images/videos and text, e.g., CLIP; text contrastive models have been extensively studied recently for their strong ability to produce discriminative sentence embeddings, e.g., SimCSE. However, there is no clear way to quickly adapt these two lines of work to multi-channel video-language retrieval with limited data and resources. In this paper, we identify a principled model design space with two axes: how to represent videos and how to fuse video and text information. Based on a categorization of recent methods, we investigate the options of representing videos using continuous feature vectors or discrete text tokens; for the fusion method, we explore the use of a multimodal transformer or a pretrained contrastive text model. We extensively evaluate the four combinations on five video-language datasets. We surprisingly find that discrete text tokens coupled with a pretrained contrastive text model yield the best performance, which can even outperform the state of the art on the iVQA and How2QA datasets without additional training on millions of video-text data. Further analysis shows that this is because representing videos as text tokens captures the key visual information, and text tokens are naturally aligned with text models that are strong retrievers after the contrastive pretraining process. All of this empirical analysis establishes a solid foundation for future research on affordable and upgradable multimodal intelligence.

DiGeo: Discriminative Geometry-Aware Learning for Generalized Few-Shot Object Detection
Jiawei Ma (Columbia University), Yulei Niu (Columbia University), Jincheng Xu (Columbia University), Shiyuan Huang (Columbia University), Guangxing Han (Columbia University), Shih-Fu Chang (Columbia University)

Generalized few-shot object detection aims to achieve precise detection on both base classes with abundant annotations and novel classes with limited training data. Existing approaches enhance few-shot generalization with the sacrifice of base-class performance, or maintain high precision in base-class detection with limited improvement in novel-class adaptation. In this paper, we point out the reason is insufficient Discriminative feature learning for all of the classes. As such, we propose a new training framework, DiGeo, to learn Geometry-aware features of inter-class separation and intra-class compactness. To guide the separation of feature clusters, we derive an offline simplex equiangular tight frame (ETF) classifier whose weights serve as class centers and are maximally and equally separated. To tighten the cluster for each class, we include adaptive class-specific margins into the classification loss and encourage the features close to the class centers. Experimental studies on two few-shot benchmark datasets (VOC, COCO) and one long-tail dataset (LVIS) demonstrate that, with a single model, our method can effectively improve generalization on novel classes without hurting the detection of base classes.
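The geometry of the ETF classifier described above is easy to verify numerically: K unit-norm class centers whose pairwise cosine similarity is exactly -1/(K-1), the maximal equiangular separation. A minimal construction, independent of the paper's code:

```python
import numpy as np

def simplex_etf(num_classes, dim, seed=0):
    """Class centers of a simplex equiangular tight frame: K unit vectors in
    R^dim with pairwise cosine similarity exactly -1/(K-1)."""
    assert dim >= num_classes
    rng = np.random.default_rng(seed)
    # Random partial orthogonal matrix P (dim x K) via QR decomposition.
    p, _ = np.linalg.qr(rng.standard_normal((dim, num_classes)))
    k = num_classes
    m = p @ (np.eye(k) - np.ones((k, k)) / k)  # center the simplex at the origin
    return m * np.sqrt(k / (k - 1))            # rescale columns to unit norm

W = simplex_etf(10, 64)   # 10 class centers in a 64-dim feature space
cos = W.T @ W             # Gram matrix: 1 on the diagonal, -1/9 elsewhere
```

In DiGeo these fixed, maximally separated directions serve as the classifier weights, and the training loss pulls features toward their class center, which is what yields inter-class separation and intra-class compactness.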

Supervised Masked Knowledge Distillation for Few-Shot Transformers
Han Lin (Columbia University), Guangxing Han (Columbia University), Jiawei Ma (Columbia University), Shiyuan Huang (Columbia University), Xudong Lin (Columbia University), Shih-Fu Chang (Columbia University)

Vision Transformers (ViTs) have emerged to achieve impressive performance on many data-abundant computer vision tasks by capturing long-range dependencies among local features. However, under few-shot learning (FSL) settings on small datasets with only a few labeled examples, ViTs tend to overfit and suffer severe performance degradation due to their lack of CNN-like inductive bias. Previous works in FSL avoid such problems either through the help of self-supervised auxiliary losses, or through the dexterous use of label information under supervised settings. But the gap between self-supervised and supervised few-shot Transformers is still unfilled. Inspired by recent advances in self-supervised knowledge distillation and masked image modeling (MIM), we propose a novel Supervised Masked Knowledge Distillation model (SMKD) for few-shot Transformers which incorporates label information into self-distillation frameworks. Compared with previous self-supervised methods, we allow intra-class knowledge distillation on both class and patch tokens, and introduce the challenging task of masked patch token reconstruction across intra-class images. Experimental results on four few-shot classification benchmark datasets show that our method with its simple design outperforms previous methods by a large margin and achieves a new state-of-the-art. Detailed ablation studies confirm the effectiveness of each component of our model. Code for this paper is available here: this https URL .

FLEX: Full-Body Grasping Without Full-Body Grasps
Purva Tendulkar (Columbia University), Dídac Surís (Columbia University), Carl Vondrick (Columbia University)

Synthesizing 3D human avatars interacting realistically with a scene is an important problem with applications in AR/VR, video games and robotics. Towards this goal, we address the task of generating a virtual human — hands and full body — grasping everyday objects. Existing methods approach this problem by collecting a 3D dataset of humans interacting with objects and training on this data. However, 1) these methods do not generalize to different object positions and orientations, or to the presence of furniture in the scene, and 2) the diversity of their generated full-body poses is very limited. In this work, we address all the above challenges to generate realistic, diverse full-body grasps in everyday scenes without requiring any 3D full-body grasping data. Our key insight is to leverage the existence of both full-body pose and hand grasping priors, composing them using 3D geometrical constraints to obtain full-body grasps. We empirically validate that these constraints can generate a variety of feasible human grasps that are superior to baselines both quantitatively and qualitatively. See our webpage for more details: this https URL .

Humans As Light Bulbs: 3D Human Reconstruction From Thermal Reflection
Ruoshi Liu (Columbia University), Carl Vondrick (Columbia University)

The relatively hot temperature of the human body causes people to turn into long-wave infrared light sources. Since this emitted light has a larger wavelength than visible light, many surfaces in typical scenes act as infrared mirrors with strong specular reflections. We exploit the thermal reflections of a person onto objects in order to locate their position and reconstruct their pose, even if they are not visible to a normal camera. We propose an analysis-by-synthesis framework that jointly models the objects, people, and their thermal reflections, which combines generative models with differentiable rendering of reflections. Quantitative and qualitative experiments show our approach works in highly challenging cases, such as with curved mirrors or when the person is completely unseen by a normal camera.

Tracking Through Containers and Occluders in the Wild
Basile Van Hoorick (Columbia University), Pavel Tokmakov (Toyota Research Institute), Simon Stent (Woven Planet), Jie Li (Toyota Research Institute), Carl Vondrick (Columbia University)

Tracking objects with persistence in cluttered and dynamic environments remains a difficult challenge for computer vision systems. In this paper, we introduce TCOW, a new benchmark and model for visual tracking through heavy occlusion and containment. We set up a task where the goal is to, given a video sequence, segment both the projected extent of the target object, as well as the surrounding container or occluder whenever one exists. To study this task, we create a mixture of synthetic and annotated real datasets to support both supervised learning and structured evaluation of model performance under various forms of task variation, such as moving or nested containment. We evaluate two recent transformer-based video models and find that while they can be surprisingly capable of tracking targets under certain settings of task variation, there remains a considerable performance gap before we can claim a tracking model to have acquired a true notion of object permanence.

Doubly Right Object Recognition: A Why Prompt for Visual Rationales
Chengzhi Mao (Columbia University), Revant Teotia (Columbia University), Amrutha Sundar (Columbia University), Sachit Menon (Columbia University), Junfeng Yang (Columbia University), Xin Wang (Microsoft Research), Carl Vondrick (Columbia University)

Many visual recognition models are evaluated only on their classification accuracy, a metric for which they obtain strong performance. In this paper, we investigate whether computer vision models can also provide correct rationales for their predictions. We propose a “doubly right” object recognition benchmark, where the metric requires the model to simultaneously produce both the right labels as well as the right rationales. We find that state-of-the-art visual models, such as CLIP, often provide incorrect rationales for their categorical predictions. However, by transferring the rationales from language models into visual representations through a tailored dataset, we show that we can learn a “why prompt,” which adapts large visual representations to produce correct rationales. Visualizations and empirical experiments show that our prompts significantly improve performance on doubly right object recognition, in addition to zero-shot transfer to unseen tasks and datasets.

What You Can Reconstruct From a Shadow
Ruoshi Liu (Columbia University), Sachit Menon (Columbia University), Chengzhi Mao (Columbia University), Dennis Park (Toyota Research Institute), Simon Stent (Woven Planet), Carl Vondrick (Columbia University)

3D reconstruction is a fundamental problem in computer vision, and the task is especially challenging when the object to reconstruct is partially or fully occluded. We introduce a method that uses the shadows cast by an unobserved object in order to infer the possible 3D volumes under occlusion. We create a differentiable image formation model that allows us to jointly infer the 3D shape of an object, its pose, and the position of a light source. Since the approach is end-to-end differentiable, we are able to integrate learned priors of object geometry in order to generate realistic 3D shapes of different object categories. Experiments and visualizations show that the method is able to generate multiple possible solutions that are consistent with the observation of the shadow. Our approach works even when the position of the light source and object pose are both unknown. Our approach is also robust to real-world images where ground-truth shadow mask is unknown.

CLIP-Sculptor: Zero-Shot Generation of High-Fidelity and Diverse Shapes From Natural Language
Aditya Sanghi (Autodesk Research), Rao Fu (Brown University), Vivian Liu (Columbia University), Karl D.D. Willis (Autodesk Research), Hooman Shayani (Autodesk Research), Amir H. Khasahmadi (Autodesk Research), Srinath Sridhar (Brown University), Daniel Ritchie (Brown University)

Recent works have demonstrated that natural language can be used to generate and edit 3D shapes. However, these methods generate shapes with limited fidelity and diversity. We introduce CLIP-Sculptor, a method to address these constraints by producing high-fidelity and diverse 3D shapes without the need for (text, shape) pairs during training. CLIP-Sculptor achieves this in a multi-resolution approach that first generates in a low-dimensional latent space and then upscales to a higher resolution for improved shape fidelity. For improved shape diversity, we use a discrete latent space which is modeled using a transformer conditioned on CLIP’s image-text embedding space. We also present a novel variant of classifier-free guidance, which improves the accuracy-diversity trade-off. Finally, we perform extensive experiments demonstrating that CLIP-Sculptor outperforms state-of-the-art baselines.

Ten Noteworthy AI Research Papers of 2023


This year has felt distinctly different. I've been working in, on, and with machine learning and AI for over a decade, yet I can't recall a time when these fields were as popular and rapidly evolving as they have been this year.

To conclude an eventful 2023 in machine learning and AI research, I'm excited to share 10 noteworthy papers I've read this year. My personal focus has been more on large language models, so you'll find a heavier emphasis on large language model (LLM) papers than computer vision papers this year.

I resisted labeling this article "Top AI Research Papers of 2023" because determining the "best" paper is subjective. The selection criteria were based on a mix of papers I either particularly enjoyed or found impactful and worth noting. (The sorting order is a recommended reading order, not an ordering by perceived quality or impact.)

By the way, if you scroll down to the end of this article, you'll find a little surprise. Thanks for all your support, and I wish you a great start to the new year!

1) Pythia — Insights from Large-Scale Training Runs

With Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling , the researchers originally released 8 LLMs ranging from 70M to 12B parameters (with both weights and data publicly released, which is rare).

But in my opinion, the standout feature of this paper is that they also released the training details, analyses, and insights (some of them shown in the annotated figure below). 


Here are some questions that the Pythia paper addresses:

Does pretraining on duplicated data (i.e., training for >1 epoch) make a difference? It turns out that deduplication does not benefit or hurt performance.

Does training order influence memorization? Unfortunately, it turns out that it does not. "Unfortunately," because if this was true, we could mitigate undesirable verbatim memorization issues by reordering the training data.

Does pretrained term frequency influence task performance? Yes, few-shot accuracy tends to be higher for terms that occur more frequently.

Does increasing the batch size affect training efficiency and model convergence? Doubling the batch size halves the training time but doesn't hurt convergence.

Today, only six months later, the Pythia LLMs are by no means groundbreaking in raw capability. However, I am including this paper because it not only tries to answer interesting questions about training settings but is also a positive example regarding details and transparency. Moreover, the small LLMs in the <1B range are nice templates for small studies and tinkering, or starters for pretraining experiments (here's a link to their GitHub repository ).

My wish for 2024 is that we see more studies like this and well-written papers in the coming year!

2) Llama 2: Open Foundation and Fine-Tuned Chat Models

Llama 2: Open Foundation and Fine-Tuned Chat Models is the follow-up paper to Meta's popular first Llama paper. 

Llama 2 models, which range from 7B to 70B parameters, are one of the reasons this paper made it onto this list: these are still among the most capable and widely used openly available models. Worth noting is that the Llama 2 license also permits use in commercial applications (see the Request to Access page for details).


On the model side, what differentiates the Llama 2 suite from many other LLMs is that the models come as standard pretrained models and chat models that have been finetuned via reinforcement learning with human feedback (RLHF, the method used to create ChatGPT) to follow human instructions similar to ChatGPT — RLHF-finetuned models are still rare.


For more details on RLHF and how it's used in Llama 2, see my more comprehensive standalone article below.

LLM Training: RLHF and Its Alternatives

Next to the fact that Llama 2 models are widely used and come with RLHF instruction-finetuned variants, the other reason I decided to include the paper on this list is the accompanying in-depth 77-page research report.

Here, the authors also nicely illustrated the evolution of the Llama 2 70B Chat models, tracing their journey from the initial supervised finetuning (SFT-v1) to the final RLHF finetuning stage with PPO (RLHF-v5). The chart reflects consistent improvements in both the harmlessness and helpfulness axes, as shown in the annotated plots below.

Even though models such as Mixtral-8x7B (more later), DeepSeek-67B, and Yi-34B top the larger Llama-2-70B models in public benchmarks, Llama 2 remains a common and popular choice when it comes to openly available LLMs and developing methods on top of them.

Furthermore, even though some benchmarks indicate that there may be better models, one of the bigger challenges this year has been the trustworthiness of benchmarks. For instance, how do we know that the models haven't been trained on said benchmarks and the scores aren't inflated? In classic machine learning, when someone proposed a new gradient boosting model, it was relatively easy to reproduce the results and check. Nowadays, given how expensive and complex it is to train LLMs (and the fact that most researchers either don't disclose the architecture or the training data details), it is impossible to tell. 

To conclude, it's refreshing to see Meta doubling down on open source even as other major companies roll out their own proprietary large language models (Google's Bard and Gemini, Amazon's Q, Twitter/X's Grok, and OpenAI's ChatGPT).

3) QLoRA: Efficient Finetuning of Quantized LLMs

QLoRA: Efficient Finetuning of Quantized LLMs has been one of the favorite techniques in the LLM research and finetuning community this year because it makes the already popular LoRA (low-rank adaptation) technique more memory efficient. In short, this means that you can fit larger models onto smaller GPUs.

QLoRA stands for quantized LoRA (low-rank adaptation). The standard LoRA method modifies a pretrained LLM by adding low-rank matrices to the weights of the model's layers. These matrices are smaller and, therefore, require fewer resources to update during finetuning.

In QLoRA, it is the pretrained model's weights that are quantized, meaning their numerical precision is reduced. This is done by mapping the continuous range of weight values to a limited set of discrete levels (4-bit in the paper), while the low-rank adapter matrices are kept in higher precision for training. This process reduces the model's memory footprint, as storing and operating on lower-precision numbers requires less memory.
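The standard LoRA update can be sketched in a few lines (toy sizes and values, pure Python; real LoRA operates on transformer weight matrices inside a training framework):

```python
# Minimal sketch of a LoRA-style low-rank update. Sizes are illustrative;
# real LoRA is applied to large transformer weight matrices.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_merge(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A, the merged LoRA weight."""
    BA = matmul(B, A)
    return [[w + (alpha / r) * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

d_out, d_in, r = 4, 4, 1
W = [[1.0] * d_in for _ in range(d_out)]   # frozen pretrained weight
A = [[0.5] * d_in for _ in range(r)]       # small trainable factor A (r x d_in)
B = [[2.0] * r for _ in range(d_out)]      # small trainable factor B (d_out x r)
W_merged = lora_merge(W, A, B, alpha=1.0, r=r)

# Trainable parameters: r * (d_in + d_out) instead of d_in * d_out.
print(r * (d_in + d_out), d_in * d_out)  # 8 16
```

The point of the low rank r is visible in the last line: the adapters add r * (d_in + d_out) trainable parameters instead of the full d_in * d_out.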

According to the QLoRA paper, QLoRA reduces the memory requirements of a 65B Llama model so that it fits onto a single 48 GB GPU (like an A100). The 65B Guanaco model, obtained from quantized 4-bit training of 65B Llama, maintains full 16-bit finetuning task performance, reaching 99.3% of ChatGPT's performance after only 24 hours of finetuning.
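To make the memory argument concrete, here is a back-of-the-envelope sketch of weight storage at different precisions (illustrative only; actual memory use also includes activations, optimizer states, and the adapters themselves):

```python
# Back-of-the-envelope weight-memory estimate at different bit widths.

def weight_memory_gb(n_params, bits):
    """Memory (in GB) needed to store n_params weights at the given bit width."""
    return n_params * bits / 8 / 1e9

params_65b = 65e9
print(weight_memory_gb(params_65b, 16))  # 130.0 GB in 16-bit precision
print(weight_memory_gb(params_65b, 4))   # 32.5 GB at 4-bit precision
```

This is why 4-bit quantization is what lets a 65B model's weights fit on a single 48 GB card, with room left for the higher-precision adapters.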

I've also run many QLoRA experiments this year and found QLoRA a handy tool for reducing GPU memory requirements during finetuning. There's a trade-off, though: the extra quantization step results in additional computational overhead, meaning the training will be a bit slower than regular LoRA.

LLM finetuning remains as relevant as ever as researchers and practitioners aim to create custom LLMs. And I appreciate techniques like QLoRA that help make this process more accessible by lowering the GPU memory-requirement barrier.

4) BloombergGPT: A Large Language Model for Finance

Looking at all the papers published this year, BloombergGPT: A Large Language Model for Finance may look like an odd choice for a top-10 list because it didn't result in a groundbreaking new insight, methodology, or open-source model. 

I include it because it's an interesting case study where someone pretrained a relatively large LLM on a domain-specific dataset. Moreover, the description was pretty thorough, which is becoming increasingly rare. This is especially true when it comes to papers with authors employed at companies -- one of the trends this year was that major companies are becoming increasingly secretive about architecture or dataset details to preserve trade secrets in this competitive landscape (PS: I don't fault them for that).

Also, BloombergGPT made me think of all the different ways we can pretrain and finetune models on domain-specific data, as summarized in the figure below (note that this was not explored in the BloombergGPT paper, but it would be interesting to see future studies on that).

In short, BloombergGPT is a 50-billion parameter language model for finance, trained on 363 billion tokens from finance data and 345 billion tokens from a general, publicly available dataset. For comparison, GPT-3 is 3.5x larger (175 billion parameters) but was trained on 1.4x fewer tokens (499 billion).

Why did the authors use an architecture with "only" 50 billion parameters when GPT-3 is 3.5x larger? That's easy to answer: they adopted the Chinchilla scaling laws and found this to be a good model size given the amount of available finance data.
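As a rough illustration of this style of sizing calculation, here is the commonly cited ~20-tokens-per-parameter Chinchilla rule of thumb (a simplification, not the paper's exact method):

```python
# Simplified Chinchilla-style sizing: ~20 training tokens per model
# parameter. This is a rule of thumb, not the full scaling-law fit.

def chinchilla_optimal_params(n_tokens, tokens_per_param=20):
    """Roughly compute-optimal parameter count for a given token budget."""
    return n_tokens / tokens_per_param

# BloombergGPT's combined corpus: 363B finance + 345B general tokens.
tokens = 363e9 + 345e9
print(chinchilla_optimal_params(tokens) / 1e9)  # ~35.4 (billion parameters)
```

The rule of thumb lands in the same ballpark as the paper's choice, which shows why a ~700B-token budget points toward a model far smaller than GPT-3's 175B parameters.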

Is it worth (pre)training the LLM on the combined dataset from scratch? Based on the paper, the model performs really well in the target domain. However, we don't know whether it's better than a) further pretraining a pretrained model on domain-specific data or b) finetuning a pretrained model on domain-specific data.

Despite the minor criticism above, this is an interesting paper that serves as a useful case study and example for domain-specific LLMs; plus, it leaves room for further research on pretraining versus finetuning to instill knowledge into an LLM.

(PS: For those curious about a comparison to finetuning, as Rohan Paul shared with me, the "small" AdaptLLM-7B model outperforms BloombergGPT on one dataset and nearly matches its performance on three other finance datasets. Although BloombergGPT appears to be slightly better overall, it's worth noting that training AdaptLLM-7B cost about $100, in contrast to BloombergGPT's multi-million dollar investment.)

5) Direct Preference Optimization: Your Language Model is Secretly a Reward Model

Before discussing the Direct Preference Optimization: Your Language Model is Secretly a Reward Model paper, let's take a short step back and discuss the method it aims to replace, Reinforcement Learning from Human Feedback (RLHF).

RLHF is the main technique behind ChatGPT and Llama 2 Chat models. In RLHF, which I described in more detail in a separate article , we use a multi-step procedure:

Supervised finetuning: The model is initially trained on a dataset containing instructions and the desired responses.

Reward modeling: Human raters provide feedback on the model's outputs. This feedback is used to create a reward model, which learns to predict what kinds of outputs are to be preferred.

Proximal policy optimization (PPO): The model generates outputs, and the reward model scores each output. The PPO algorithm (a reinforcement learning algorithm used to finetune the model's policy) uses these scores to adjust the model's policy toward generating higher-quality outputs.
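Step 2 above is typically trained with a pairwise ranking loss on human preference labels. A minimal sketch of such a Bradley-Terry-style objective (a common formulation, not a quote from any specific paper):

```python
import math

# Pairwise reward-model loss as commonly used in RLHF's reward-modeling
# step: the reward of the preferred response should exceed the reward of
# the rejected one.

def reward_pair_loss(r_chosen, r_rejected):
    """-log sigmoid(r_chosen - r_rejected); small when chosen >> rejected."""
    return -math.log(1 / (1 + math.exp(-(r_chosen - r_rejected))))

print(reward_pair_loss(2.0, 0.0))  # small loss: ranking is correct
print(reward_pair_loss(0.0, 2.0))  # large loss: ranking is inverted
```

Minimizing this loss pushes the scalar reward of preferred responses above that of rejected ones, which is all PPO needs as a training signal in step 3.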

While RLHF is popular and effective, as we've seen with ChatGPT and Llama 2, it's also pretty complex to implement and finicky. 

The Direct Preference Optimization (DPO) paper introduces an algorithm that optimizes language models to align with human preferences without explicit reward modeling or reinforcement learning. Instead, DPO uses a simple classification objective.

In DPO, we still keep the supervised finetuning step (step 1 above), but we replace steps 2 and 3 with a single step to further finetune the model on the preference data. In other words, DPO skips the reward model creation required by RLHF entirely, which significantly simplifies the finetuning process.
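A minimal sketch of the DPO classification objective for a single preference pair (simplified to scalar log-probabilities; beta is the usual temperature-like hyperparameter):

```python
import math

# Sketch of the DPO objective for one preference pair. logp_* are
# log-probabilities of the chosen/rejected responses under the policy
# being finetuned and under the frozen reference model.

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """-log sigmoid(beta * (policy margin - reference margin))."""
    margin = ((logp_chosen - ref_logp_chosen)
              - (logp_rejected - ref_logp_rejected))
    return -math.log(1 / (1 + math.exp(-beta * margin)))

# Policy already prefers the chosen response more than the reference does,
# so the loss is below log(2) and gradients push the margin further up:
print(dpo_loss(-10.0, -14.0, -12.0, -13.0))
```

Note that there is no reward model and no sampling loop here: the loss is computed directly from log-probabilities on the preference data, which is exactly why DPO is so much simpler than RLHF with PPO.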

How well does it work? There haven't been many models trained with DPO until very recently. (This makes sense because DPO is also a relatively recent method.) However, one recent example is the Zephyr 7B model described in Zephyr: Direct Distillation of LM Alignment . Zephyr-7B is based on a Mistral-7B base LLM that has been finetuned using DPO. (There will be more on Mistral later.)

As the performance tables below reveal, the 7B-parameter Zephyr model outperformed all other models in its size class at the time of its release. Even more impressively, Zephyr-7B surpassed the 10-times-larger 70B-parameter Llama 2 chat model on the conversational MT-Bench benchmark.

In summary, the appeal of the DPO paper lies in the simplicity of its method. The scarcity of chat models trained using RLHF, with Llama 2 as a notable exception, can likely be attributed to the complexity of the RLHF approach. Given this, I think it's reasonable to anticipate an increase in the adoption of DPO models in the coming year.

6) Mistral 7B

I must admit that the Mistral 7B paper wasn't among my favorites due to its brevity. However, the model it proposed was quite impactful.

I decided to include the paper on this list because the Mistral 7B model was not only very popular upon release, but also served as the base model, leading to the development of two other notable models: Zephyr 7B and the latest Mistral Mixture of Experts (MoE) approach. These models are good examples of the trend I foresee for small LLMs in (at least) the early half of 2024.

Before we discuss the Zephyr 7B and Mistral MoE models, let's briefly talk about Mistral 7B itself.

In short, the Mistral 7B paper introduces a compact yet powerful language model that, despite its relatively modest size of 7 billion parameters, outperforms its larger counterparts, such as the 13B Llama 2 model, in various benchmarks. (Next to the two-times larger Qwen 14B, Mistral 7B was also the base model used in the winning solutions of this year's NeurIPS LLM Finetuning & Efficiency challenge.)

Why exactly it is so good is unclear, but it is likely due to its training data. Neither Llama 2 nor Mistral discloses the training data, so we can only speculate.

Architecture-wise, the model shares grouped-query attention with Llama 2. While otherwise very similar to Llama 2, one interesting addition to the Mistral architecture is sliding window attention, which saves memory and improves computational throughput for faster training. (Sliding window attention was previously proposed in Child et al. 2019 and Beltagy et al. 2020.)

The sliding window attention mechanism used in Mistral is essentially a fixed-size attention block that allows a current token to attend to only a specific number of previous tokens (instead of all previous tokens), as illustrated in the figure below.

In the specific case of 7B Mistral, the attention block size is 4096 tokens, and the researchers trained the model with context sizes of up to 100k tokens. To provide a concrete example: in regular self-attention, a model at the 50,000th token can attend to all 49,999 previous tokens. In sliding window self-attention, the Mistral model can only attend to tokens 45,904 to 50,000 (since 50,000 - 4,096 = 45,904).
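The windowed attention range described above boils down to simple index arithmetic; a minimal sketch:

```python
# Which previous tokens a query token can attend to under sliding window
# attention with a fixed window, vs. regular causal self-attention.

def attend_range(token_idx, window_size=None):
    """Return (first, last) attendable token indices for a causal model."""
    first = 0 if window_size is None else max(0, token_idx - window_size)
    return first, token_idx

print(attend_range(50_000))         # (0, 50000): full causal attention
print(attend_range(50_000, 4_096))  # (45904, 50000): sliding window
```

The memory saving follows directly: each token's attention cost is bounded by the window size rather than growing with the sequence length.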

However, sliding window attention is mainly used to improve computational performance. The fact that Mistral outperforms larger Llama 2 models is likely not because of sliding window attention but rather despite sliding window attention.

Zephyr and Mixtral

One reason Mistral 7B is an influential model is that it served as the base model for Zephyr 7B, as mentioned earlier in the DPO section. Zephyr 7B, the first popular model trained with DPO to outperform other alternatives, has potentially set the stage for DPO to become the preferred method for finetuning chat models in the coming months.

Another noteworthy model derived from Mistral 7B is the recently released Mistral Mixture of Experts (MoE) model , also known as Mixtral-8x7B. This model matches or exceeds the performance of the larger Llama-2-70B on several public benchmarks.

For more benchmarks, also see the official Mixtral blog post announcement. The team also released a Mixtral-8x7B-Instruct model that has been finetuned with DPO (but as of this writing, there are no benchmarks comparing it to Llama-2-70B-Chat, the RLHF-finetuned model).

GPT-4 is also rumored to be an MoE consisting of 16 submodules. Each of these 16 submodules is rumored to have 111 billion parameters (for reference, GPT-3 has 175 billion parameters). If you read my AI and Open Source in 2023 article approximately two months ago, I mentioned that "It will be interesting to see if MoE approaches can lift open-source models to new heights in 2024". It looks like Mixtral started this trend early, and I am sure that this is just the beginning.

Mixture of Experts 101

If you are new to MoE models, here's a short explanation.

The figure above shows the architecture behind the Switch Transformer, which uses 1 expert per token with 4 experts in total. Mixtral-8x7B, on the other hand, consists of 8 experts and uses 2 experts per token.

Why MoEs? Combined, the 8 experts in a 7B model like Mixtral would naively amount to ~56B parameters. In practice, it's less than 56B because the MoE approach is only applied to the FFN (feed-forward network, aka fully connected) layers, not the self-attention weight matrices. So, the total is likely closer to 40-50B parameters.

Note that the router reroutes the tokens such that only <14B parameters (2x <7B, instead of all <56B) are used at a time for the forward pass, so the training (and especially inference) will be faster compared to the traditional non-MoE approach.
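The parameter arithmetic above can be sketched as follows (the shared fraction is an illustrative assumption, since Mixtral's exact attention/FFN split isn't restated here):

```python
# Rough total vs. active parameter arithmetic for a top-2-of-8 MoE like
# Mixtral-8x7B. Assumes a ~7B path per expert, with some fraction of those
# parameters (attention, embeddings) shared across experts -- the 0.3 below
# is an illustrative guess, not a published number.

def moe_param_counts(per_expert, n_experts, top_k, shared_fraction):
    """Return (total, active-per-token) parameter estimates."""
    shared = per_expert * shared_fraction
    expert_only = per_expert - shared
    total = shared + n_experts * expert_only
    active = shared + top_k * expert_only
    return total, active

total, active = moe_param_counts(7e9, n_experts=8, top_k=2, shared_fraction=0.3)
print(total / 1e9, active / 1e9)  # total in the ~40-50B range, active < 14B
```

This is the appeal of MoEs in one calculation: total capacity grows with the number of experts, while the per-token compute only grows with top_k.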

If you want to learn more about MoEs, here's a reading list recommended by Sophia Yang : 

The Sparsely-Gated Mixture-of-Experts Layer (2017)

GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding (2020)  

MegaBlocks: Efficient Sparse Training with Mixture-of-Experts (2022)  

Mixture-of-Experts Meets Instruction Tuning (2023)

Furthermore, if you are interested in trying MoE LLMs, also check out the OpenMoE repository, which implemented and shared MoE LLMs earlier this year.

Other Small but Competitive LLMs

Mistral 7B, Zephyr 7B, and Mixtral-8x7B are excellent examples of the progress made in 2023 with small yet capable models featuring openly available weights. Another notable model, a runner-up on my favorite papers list, is Microsoft's phi series.

The secret sauce of phi is training on high-quality data (referred to as “textbook quality data”) obtained by filtering web data.

Released in stages throughout 2023, the phi models include phi-1 (1.3B parameters), phi-1.5 (1.3B parameters), and phi-2 (2.7B parameters). The latter, released just two weeks ago, is already said to match or outperform Mistral 7B, despite being less than half its size.

For more information about the phi models, I recommend the following resources:

Textbooks Are All You Need -- the phi-1 paper

Textbooks Are All You Need II: phi-1.5 Technical Report

Phi-2: The Surprising Power of Small Language Models -- the phi-2 announcement post

7) Orca 2: Teaching Small Language Models How to Reason

Orca 2: Teaching Small Language Models How to Reason is a relatively new paper, and time will tell whether it has a lasting impact on how we train LLMs in the upcoming months or years. 

I decided to include it because it combines several concepts and ideas. 

One is the idea of distilling data from large, capable models such as GPT-4 to create a synthetic dataset to train small but capable LLMs. This idea was described in the Self-Instruct paper, which came out last year. Earlier this year, Alpaca (a Llama model finetuned on ChatGPT outputs) really popularized this approach.

How does this work? In a nutshell, it's a 4-step process:

Seed a task pool with a set of human-written instructions (175 in this case) and sample instructions from it;

Use a pretrained LLM (like GPT-3) to determine the task category;

Given the new instruction, let a pretrained LLM generate the response;

Collect, prune, and filter the responses before adding them to the task pool.
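The 4-step loop above can be sketched as follows (with the LLM calls stubbed out; the prompts and filter are illustrative, not taken from the Self-Instruct paper):

```python
import random

# Skeleton of the 4-step Self-Instruct-style loop. `llm` stands in for
# calls to a pretrained model (e.g., GPT-3); it is stubbed here so only
# the control flow is illustrated, not real generation.

def llm(prompt):
    return f"response to: {prompt}"

def self_instruct(seed_instructions, rounds=3, keep=lambda r: len(r) > 0):
    pool = list(seed_instructions)                    # 1) seed the task pool
    dataset = []
    for _ in range(rounds):
        new_instr = llm(f"new instruction like: {random.choice(pool)}")
        category = llm(f"classify task: {new_instr}")  # 2) determine category
        response = llm(new_instr)                      # 3) generate a response
        if keep(response):                             # 4) prune and filter
            dataset.append((new_instr, category, response))
            pool.append(new_instr)                     # grow the task pool
    return dataset

data = self_instruct(["Summarize this article.", "Translate this to French."])
print(len(data))  # 3
```

The key design point is the feedback loop: surviving instructions re-enter the pool, so the synthetic dataset bootstraps itself from a small human-written seed set.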

The other idea may not be surprising but is worth highlighting: high-quality data is important for finetuning. For instance, the LIMA paper proposed a human-generated, high-quality dataset of only 1k training examples; finetuning on it outperformed finetuning the same model on 50k ChatGPT-generated responses.

Unlike previous research that heavily relied on imitation learning to replicate outputs from larger models, Orca 2 aims to teach "small" (i.e., 7B and 13B) LLMs various reasoning techniques (like step-by-step reasoning, recall-then-generate, etc.) and to help them determine the most effective strategy for each task. This approach has led Orca 2 to outperform similar-sized models noticeably and even achieve results comparable to models 5-10 times larger.

While we haven't seen any extensive studies on this, the Orca 2 approach might also be able to address the issues with synthetic data highlighted in The False Promise of Imitating Proprietary LLMs paper. Here, the researchers investigated finetuning weaker language models to imitate stronger proprietary models like ChatGPT, using examples such as Alpaca and Self-Instruct. Initially, the imitation models showed promising results, performing well in following instructions and receiving competitive ratings from crowd workers compared to ChatGPT. However, follow-up evaluations revealed that these imitation models only seemed to perform well to a human observer and often generated factually incorrect responses.

8) ConvNets Match Vision Transformers at Scale

In recent years, I've almost exclusively worked with large language transformers or vision transformers (ViTs) due to their good performance. 

Switching gears from language to computer vision papers for the last three entries, what I find particularly appealing about transformers for computer vision is that pretrained ViTs are even easier to finetune than convolutional neural networks. (I summarized a short hands-on talk at CVPR earlier this year here: https://magazine.sebastianraschka.com/p/accelerating-pytorch-model-training). 

To my surprise, I stumbled upon the ConvNets Match Vision Transformers at Scale paper, which shows that convolutional neural networks (CNNs) are, in fact, competitive with ViTs when given access to large enough datasets.

Here, researchers invested compute budgets of up to 110k TPU hours to do a fair comparison between ViTs and CNNs. The outcome was that when CNNs are pretrained with a compute budget similar to what is typically used for ViTs, they can match the performance of ViTs. For this, they pretrained on 4 billion labeled images from JFT and subsequently finetuned the models on ImageNet.

9) Segment Anything

Object recognition and segmentation in images and videos, along with classification and generative modeling, are the main research fields in computer vision. 

To briefly highlight the difference between these two tasks: object detection is about predicting bounding boxes and the associated labels, whereas segmentation classifies each pixel to distinguish between foreground and background objects.

Meta's Segment Anything paper is a notable milestone for open source and image segmentation research. The paper introduces a new task, model, and dataset for image segmentation. The accompanying image dataset is the largest segmentation dataset to date, with over 1 billion masks on 11 million images.

However, what's rare and especially laudable is that the researchers used licensed and privacy-respecting images, so the model can be open-sourced without major copyright concerns.

The Segment Anything Model (SAM) consists of three main components, as summarized in the annotated figure above.

In slightly more detail, the three components can be summarized as follows:

An image encoder utilizing a masked autoencoder based on a pretrained vision transformer (ViT) that can handle high-resolution inputs. This encoder is run once per image and can be applied before prompting the model.

A prompt encoder that handles two types of prompts: sparse (points, boxes, text) and dense (masks). Points and boxes are represented by positional encodings combined with learned embeddings for each prompt type, while free-form text uses an off-the-shelf text encoder from CLIP. Dense prompts, i.e., masks, are embedded using convolutions and summed element-wise with the image embedding.

A mask decoder that maps the image embedding, prompt embeddings, and an output token to a mask. It is a decoder-style transformer architecture that computes the mask foreground probability at each image location.
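The three-component design can be sketched as a call pattern (all names and signatures here are illustrative stubs, not the actual SAM API):

```python
# Stub sketch of how SAM's three components compose. The functions are
# placeholders for the real networks; only the call pattern is shown.

def image_encoder(image):
    """Heavy ViT-based encoder; run once per image, before any prompting."""
    return f"emb({image})"

def prompt_encoder(points=None, boxes=None, text=None, mask=None):
    """Lightweight encoder for sparse (points/boxes/text) and dense (mask) prompts."""
    return [p for p in (points, boxes, text, mask) if p is not None]

def mask_decoder(image_emb, prompt_emb):
    """Fast decoder mapping both embeddings (plus an output token) to a mask."""
    return f"mask({image_emb}, {prompt_emb})"

# The expensive image encoding happens once; many prompts can reuse it.
emb = image_encoder("photo.jpg")
mask_a = mask_decoder(emb, prompt_encoder(points=[(120, 80)]))
mask_b = mask_decoder(emb, prompt_encoder(text="the dog"))
print(mask_a)
```

The design choice worth noting is the asymmetry: the heavy image encoder runs once per image, so interactive prompting only pays for the cheap prompt encoder and mask decoder.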

Image segmentation is important for applications like self-driving cars and medical imaging, among many others. In the span of just six months, the paper has already been cited more than 1,500 times, and many projects have been built on top of it.

10) Emu Video: Factorizing Text-to-Video Generation by Explicit Image Conditioning

Emu Video: Factorizing Text-to-Video Generation by Explicit Image Conditioning is another notable computer vision project from Meta's research division. 

Emu is a text-to-video model that can generate entire videos from text prompts. 

While it's not the first model for impressive text-to-video generation, it compares very favorably to previous works.

As the authors note, the Emu architecture setup is relatively simple compared to previous approaches. One of the main ideas here is that Emu factorizes the generation process into two steps: first, generating an image based on text (using a diffusion model), then creating a video conditioned on both the text and the generated image (using another diffusion model). 
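The two-step factorization can be sketched as a call pattern (the function names are illustrative stubs, not the actual implementation):

```python
# Stub sketch of Emu Video's factorized generation: first text -> image
# with one diffusion model, then (text, image) -> video with a second one.
# Function bodies are placeholders for the actual diffusion models.

def text_to_image(prompt):
    return f"image({prompt})"

def image_to_video(prompt, image):
    return f"video({prompt}, {image})"

def generate_video(prompt):
    image = text_to_image(prompt)          # step 1: text-conditioned image
    return image_to_video(prompt, image)   # step 2: text+image-conditioned video

print(generate_video("a cat surfing"))  # video(a cat surfing, image(a cat surfing))
```

Splitting the problem this way lets each diffusion model solve an easier conditional task than generating a whole video from text in one shot.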

2022 was a big year for text-to-image models like DALL-E 2, Stable Diffusion, and Midjourney. While text-to-image models remained very popular in 2023 (even though LLMs got most of the attention throughout the year), I think that text-to-video models are just about to become more prevalent in online communities in the upcoming year.

Since I am not an image or video designer, I don't have use cases for these tools at the moment; however, text-to-image and text-to-video models are nonetheless interesting to watch as a general measure of progress regarding computer vision.

This magazine is a personal passion project that does not offer direct compensation. However, for those who wish to support me, please consider purchasing a copy of one of my books. If you find them insightful and beneficial, please feel free to recommend them to your friends and colleagues.

Your support means a great deal! Thank you!

TOPBOTS

The Best of Applied Artificial Intelligence, Machine Learning, Automation, Bots, Chatbots

The GenAI Frontier: 10 Transformative LLM Research Papers of 2023 from LLaMA to GPT-4

December 5, 2023 by Mariya Yao

Generated with DALL-E 3

In the rapidly evolving landscape of Natural Language Processing, 2023 emerged as a pivotal year, witnessing groundbreaking research in the realm of Large Language Models (LLMs). These LLMs, characterized by their vast parameter sizes and impressive capabilities, played a central role in shaping the future of AI applications. This introduction provides a glimpse into the transformative research that unfolded in the field, where language models have been refined, scaled down, and even integrated with external tools to tackle a diverse range of tasks. 

If you’d like to skip around, here are the research papers we featured:

  • LLaMA by Meta AI
  • LLaMA 2 by Meta AI
  • GPT-4 by OpenAI
  • Sparks of AGI by Microsoft
  • BLIP-2 by Salesforce
  • InstructBLIP by Salesforce
  • PALM-E by Google
  • PALM-2 by Google
  • Toolformer by Meta AI
  • Tree of Thoughts by Princeton University and Google DeepMind

If such research summaries are useful for you, subscribe to our AI mailing list to be alerted when we release new material. 

Top LLM Research Papers 2023

1. LLaMA by Meta AI

Summary

The Meta AI team asserts that smaller models trained on more tokens are easier to retrain and fine-tune for specific product applications. Therefore, they introduced LLaMA (Large Language Model Meta AI), a collection of foundational language models with 7B to 65B parameters. LLaMA 33B and 65B were trained on 1.4 trillion tokens, while the smallest model, LLaMA 7B, was trained on one trillion tokens. They exclusively used publicly available datasets, without depending on proprietary or restricted data. The team also implemented key architectural enhancements and training speed optimization techniques. Consequently, LLaMA-13B outperformed GPT-3 despite being over 10 times smaller, and LLaMA-65B exhibited competitive performance with PaLM-540B.

Where to learn more about this research?

  • LLaMA: Open and Efficient Foundation Language Models (research paper)
  • Introducing LLaMA: A foundational, 65-billion-parameter large language model (blog post)

Where can you get implementation code?

  • The code implementation of the original LLaMA-1 model is available here on GitHub .

2. LLaMA 2 by Meta AI

LLaMA 2 is an enhanced version of its predecessor, trained on a new data mix, featuring a 40% larger pretraining corpus, doubled context length, and grouped-query attention. The LLaMA 2 series of models includes LLaMA 2 and LLaMA 2-Chat , optimized for dialogue, with sizes ranging from 7 to 70 billion parameters. These models exhibit superior performance in helpfulness and safety benchmarks compared to open-source counterparts and are comparable to some closed-source models. The development process involved rigorous safety measures, including safety-specific data annotation and red-teaming. The paper aims to contribute to the responsible development of LLMs by providing detailed descriptions of fine-tuning methodologies and safety improvements.

  • Llama 2: Open Foundation and Fine-Tuned Chat Models (research paper)
  • Llama 2: open source, free for research and commercial use (blog post)
  • Meta AI released LLaMA 2 models to individuals, creators, researchers, and businesses of all sizes. You can access model weights and starting code for pretrained and fine-tuned LLaMA 2 language models through GitHub .

3. GPT-4 by OpenAI

GPT-4 is a large-scale, multimodal model that accepts image and text inputs and generates text outputs. Due to competitive and safety concerns, specific details about the model’s architecture and training are withheld. In terms of performance, GPT-4 surpasses previous language models on traditional benchmarks and shows significant improvements in user intent understanding and safety properties. The model also achieves human-level performance on various exams, including a top 10% score on a simulated Uniform Bar Examination.

  • GPT-4 Technical Report (research paper)
  • GPT-4 (blog post)
  • Code implementation of GPT-4 is not available.

4. Sparks of AGI by Microsoft

In this research paper, a team from Microsoft Research analyzes an early version of OpenAI’s GPT-4, which was still under active development at the time. The team argues that GPT-4 represents a new class of large language models, exhibiting more generalized intelligence compared to previous AI models. Their investigation reveals GPT-4’s expansive capabilities across various domains, including mathematics, coding, vision, medicine, law, and psychology. They highlight that GPT-4 can solve complex and novel tasks without specialized prompting, often achieving performance close to human level. 

The Microsoft team also emphasizes the potential of GPT-4 to be considered an early, albeit incomplete, form of artificial general intelligence (AGI). They focus on identifying GPT-4’s limitations and discuss the challenges in progressing towards more advanced and comprehensive AGI versions. This includes considering new paradigms beyond the current next-word prediction model.

  • Sparks of Artificial General Intelligence: Early experiments with GPT-4 (research paper)
  • Sparks of AGI: early experiments with GPT-4 (a talk by the paper’s first author Sébastien Bubeck)
  • Not applicable

5. BLIP-2 by Salesforce

BLIP-2 is an efficient and generic pre-training framework for vision-and-language models, designed to circumvent the increasingly prohibitive cost of pre-training large-scale models. BLIP-2 leverages off-the-shelf frozen pre-trained image encoders and frozen large language models to bootstrap vision-language pre-training, incorporating a lightweight Querying Transformer pre-trained in two stages. The first stage initiates vision-language representation learning from a frozen image encoder, and the second stage propels vision-to-language generative learning from a frozen language model. 

Despite having significantly fewer trainable parameters, BLIP-2 outperforms state-of-the-art methods, surpassing DeepMind’s Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. The model also exhibits promising zero-shot image-to-text generation capabilities following natural language instructions.

  • BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models (research paper)
  • BLIP-2: Scalable Pre-training of Multimodal Foundation Models for the World’s First Open-source Multimodal Chatbot (blog post)
  • The official BLIP-2 implementation is available here on GitHub .

6. InstructBLIP by Salesforce

InstructBLIP is a novel framework for vision-language instruction tuning, enabling general-purpose models to process a wide range of visual tasks using natural language instructions. This study builds on the pre-trained BLIP-2 model, incorporating an image encoder, a large language model, and a Querying Transformer (Q-Former) to integrate the two. The instruction tuning involves fine-tuning the Q-Former while keeping the image encoder and LLM frozen. For comprehensive study and evaluation, the researchers transformed 26 datasets into instruction tuning format, using 13 datasets for instruction tuning and 13 for zero-shot evaluation. A key innovation is the instruction-aware visual feature extraction, allowing the model to extract relevant features based on given instructions. 

InstructBLIP models demonstrate state-of-the-art zero-shot performance across various vision-language tasks, significantly outperforming BLIP-2 and larger Flamingo models, as well as leading to state-of-the-art performance, when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA questions with image contexts).

  • InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning (research paper)
  • The official InstructBLIP implementation is also available on GitHub .

7. PALM-E by Google

The research paper introduces PaLM-E , a novel approach to language models that bridges the gap between words and percepts in the real world by directly incorporating continuous sensor inputs. This embodied language model seamlessly integrates multi-modal sentences containing visual, continuous state estimation, and textual information. These inputs are trained end-to-end with a pre-trained LLM and applied to various embodied tasks, including sequential robotic manipulation planning, visual question answering, and captioning.

PaLM-E, particularly the largest model with 562B parameters, demonstrates remarkable performance on a wide range of tasks and modalities. Notably, it excels in embodied reasoning tasks, exhibits positive transfer from joint training across language, vision, and visual-language domains, and achieves state-of-the-art results on the OK-VQA benchmark. Beyond embodied reasoning, PaLM-E-562B also exhibits an array of capabilities, including zero-shot multimodal chain-of-thought reasoning, few-shot prompting, OCR-free math reasoning, and multi-image reasoning, despite being trained on only single-image examples.
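The "multimodal sentence" idea can be illustrated with a toy sketch: continuous inputs are projected into the same space as the LLM's word embeddings and spliced into the token sequence at placeholder positions. The embedder functions, dimensions, and placeholder names below are hypothetical stand-ins, not PaLM-E's actual encoders.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # LLM embedding width (illustrative; PaLM-E's is far larger)

# Hypothetical embedders: PaLM-E maps images and state estimates into the
# same space as word embeddings; random stand-ins here.
def embed_text(tokens):            # stand-in for the LLM embedding table
    return rng.normal(size=(len(tokens), d))

def embed_image(image):            # stand-in for a ViT encoder + projection
    return rng.normal(size=(4, d))   # e.g. 4 visual tokens per image

def embed_state(state_vec):        # stand-in for a state-estimate projection
    return rng.normal(size=(1, d))

# A "multimodal sentence": placeholders in the text are replaced by
# continuous embeddings at the matching positions, and the whole sequence
# is consumed end-to-end by the language model.
parts = [
    embed_text(["Given", "<img>"]),
    embed_image(np.zeros((32, 32, 3))),
    embed_text(["and", "state", "<state>"]),
    embed_state(np.zeros(7)),
    embed_text(["pick", "up", "the", "block", "."]),
]
sequence = np.concatenate(parts, axis=0)
print(sequence.shape)  # (15, 8): one embedding sequence for the LLM
```

Because everything lands in one embedding sequence, the frozen or fine-tuned LLM can attend across words, image tokens, and state tokens uniformly.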


  • PaLM-E: An Embodied Multimodal Language Model (research paper)
  • PaLM-E (demos)
  • PaLM-E (blog post)
  • Code implementation of the PaLM-E model is not available.

8. PaLM 2 by Google

In May 2023, the Google team introduced PaLM 2, a successor to the original PaLM that exhibits enhanced multilingual capabilities, better reasoning skills, and greater computational efficiency. PaLM 2, based on a Transformer architecture, is trained using a mix of objectives and has been extensively evaluated on tasks involving English and other languages, as well as reasoning challenges.

The findings show that PaLM 2 significantly outperforms its predecessor in terms of task performance across various model sizes, while also achieving faster and more efficient inference. PaLM 2’s robust reasoning abilities are highlighted by substantial improvements over the original PaLM in BIG-Bench and other reasoning tasks. The model also maintains stable performance in responsible AI evaluations and offers inference-time control over toxicity without compromising other capabilities or incurring extra overhead.

  • PaLM 2 Technical Report (research paper)
  • Introducing PaLM 2 (blog post)
  • PaLM 2 (overview)
  • Code implementation of the PaLM-2 model is not available.

9. Toolformer by Meta AI

The research paper introduces Toolformer , a novel approach to enhance the capabilities of large language models (LMs) by enabling them to utilize external tools through simple APIs. While LMs excel at solving new tasks from limited examples or textual instructions, they often struggle with basic functions like arithmetic or factual lookup, where smaller models perform better. Toolformer bridges this gap by teaching LMs to autonomously determine which APIs to invoke, when to call them, what arguments to provide, and how to integrate the results into future token predictions. This learning process is self-supervised and requires only a small number of demonstrations for each API. Toolformer, based on a pretrained GPT-J with 6.7 billion parameters, significantly improves zero-shot performance across various downstream tasks, outperforming a much larger GPT-3 model and other baselines.
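The core mechanic, splicing API results back into the generated text so they inform future token predictions, can be sketched as follows. The `[Tool(args)]` marker style mirrors the paper's notation, but the tool registry, regex handling, and example tools are simplified assumptions.

```python
import re

# Hypothetical tool registry: the paper's tools include a calculator, a QA
# system, search, translation, and a calendar; two toy stand-ins here.
TOOLS = {
    "Calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "Calendar": lambda _: "Friday",
}

CALL = re.compile(r"\[(\w+)\((.*?)\)\]")

def execute_tool_calls(text):
    """Replace each '[Tool(args)]' marker the LM emitted with
    '[Tool(args) -> result]', mirroring how Toolformer splices API
    results back into the token stream."""
    def run(match):
        tool, args = match.group(1), match.group(2)
        return f"[{tool}({args}) -> {TOOLS[tool](args)}]"
    return CALL.sub(run, text)

generated = "A dozen eggs three times is [Calculator(3 * 12)] eggs."
print(execute_tool_calls(generated))
# A dozen eggs three times is [Calculator(3 * 12) -> 36] eggs.
```

In the actual system, the model is fine-tuned on self-annotated data so that it learns when to emit these markers on its own; the snippet above only shows the execution side.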


  • Toolformer: Language Models Can Teach Themselves to Use Tools (research paper)
  • The open-source implementation of Toolformer is available on GitHub.

10. Tree of Thoughts by Princeton University and Google DeepMind

The research paper introduces a groundbreaking framework for language model inference called Tree of Thoughts (ToT). LLMs have proven adept at solving tasks but are limited to token-level, left-to-right decision-making during inference. This hinders their performance in tasks requiring exploration, strategic lookahead, or pivotal initial decisions. ToT builds upon the Chain of Thought approach to prompting LLMs and enables exploration over coherent units of text called “thoughts.” These thoughts serve as intermediate steps in problem-solving, empowering LLMs to make deliberate decisions by considering multiple reasoning paths, self-evaluating choices, and making global decisions by looking ahead or backtracking when needed. The inspiration for ToT comes from “dual process” models in human decision-making, where fast, automatic decisions (System 1) are complemented by slower, deliberate decisions (System 2).

Empirical experiments demonstrate ToT’s effectiveness on challenging tasks such as Game of 24, Creative Writing, and Crosswords. As an example, in the Game of 24, where GPT-4 using chain-of-thought prompting managed to solve only 4% of the tasks, this approach achieved a remarkable success rate of 74%.
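The deliberate search loop can be sketched as a small breadth-first routine with pluggable generation and evaluation steps. In the paper, `propose` and `score` are LLM calls (the thought generator and state evaluator); here they are deterministic toy stand-ins applied to a simplified counting task in the spirit of Game of 24, so the sketch is runnable on its own.

```python
# A minimal breadth-first sketch of the ToT search loop; propose/score are
# toy stand-ins for the LLM calls the real system makes.
def tree_of_thoughts(root, propose, score, steps, beam_width):
    frontier = [root]
    for _ in range(steps):
        # Expand every state, then globally keep only the best candidates:
        # the deliberate look-ahead/pruning that plain left-to-right
        # decoding lacks.
        candidates = [t for state in frontier for t in propose(state)]
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:beam_width]
    return max(frontier, key=score)

# Toy task: choose three digits whose sum is exactly 24.
target = 24
propose = lambda state: [state + [d] for d in range(1, 10)]
score = lambda state: -abs(target - sum(state))  # higher is better

best = tree_of_thoughts([], propose, score, steps=3, beam_width=5)
print(best, sum(best))  # a three-digit sequence summing to 24
```

Swapping the stand-ins for prompted LLM calls (and adding backtracking or depth-first variants) recovers the structure the paper evaluates.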

  • Tree of Thoughts: Deliberate Problem Solving with Large Language Models (research paper)
  • Twitter thread by the paper’s first author Shunyu Yao
  • The official code implementation of the paper is available on GitHub.

As researchers continue to push the boundaries of what LLMs can achieve, the future of AI applications looks increasingly promising, offering solutions to complex challenges and enhancing human-AI collaboration. The journey of innovation in LLMs is far from over, and the world eagerly awaits the next wave of breakthroughs in the ever-expanding field of artificial intelligence.


About Mariya Yao

Mariya is the co-author of Applied AI: A Handbook For Business Leaders and former CTO at Metamaven. She "translates" arcane technical concepts into actionable business advice for executives and designs lovable products people actually want to use. Follow her on Twitter at @thinkmariya to raise your AI IQ.


Enago Academy

2023 in Review: Important updates in research and scholarly publishing


As we bid adieu to 2023, an eventful year filled with monumental shifts in the world of scholarly publishing, we are thrilled to take you on the journey through the most notable events, milestones, and challenges faced by the scientific community.

2023 was undoubtedly defined by the concerted actions against potential threats of Artificial Intelligence (AI) and instances of scientific misconduct, to fortify the bedrock of academic integrity. As we steer into 2024, it’s essential to look back at the significant happenings in the industry and grasp their implications for the future.


1. Upholding Academic Integrity

a. Web of Science Delists Over 50 Journals

Although 2023 witnessed an unfortunate surge in retractions and an increase in scientific misconduct, the community pushed back. The reasons for these retractions included data manipulation and a compromised peer review process, among others. Furthermore, in March, Web of Science took a bold step by delisting over 50 flagged journals, signaling its commitment to maintaining a high-quality scholarly record.

b. Clarivate Releases its 2023 List of Highly Cited Researchers

Clarivate unveiled its 2023 list of 7,125 Highly Cited Researcher™ designations across various fields, recognizing 6,849 influential individuals globally. The exclusion of more than 1,000 researchers from the final list came as a shock to the community.

c. Study Reports Over 67 Hijacked Journals in Scopus

A study published in November by Anna Abalkina revealed 67 hijacked journals in Scopus, prompting concerns within the community. Although some journals included legitimate work, instances of plagiarism, fabrication, or publication without peer review were reported. The study called for coordinated action within the scholarly publishing ecosystem to counteract hijacked journals.

d. Enago Academy Releases the Research Ethics Survey

In an effort to address the root causes of misconduct, Enago Academy released a Research Ethics Survey in November to promote awareness and capture researchers’ understanding of research ethics and compliance. The survey is now closed, and a comprehensive report of its findings will be shared soon.

e. Number of Retractions in 2023 Crosses 10,000 Papers

The year ended with a record broken: more than 10,000 papers were retracted in 2023. This raised concerns over the worsening state of research ethics and compliance, and underscored the collective responsibility to uphold the highest standards of research.

2. Pivoting New Peer Review Systems for a Sustainable Future of Publishing

a. NISO Introduces New Peer Review Terminologies

One of the pivotal developments in 2023 was the introduction of new peer review terminologies and standards aimed at improving transparency in the process. The American National Standards Institute (ANSI) and the National Information Standards Organization (NISO) jointly released standard terminology for peer review in July 2023, providing clear definitions and recommendations for communication in the peer review process. Although initially designed for the peer review of journal articles, the initiative is intended to expand to other forms of scholarly knowledge dissemination.

b. Enago Participates in the Peer Review Week 2023

In a proactive effort to strengthen the future of the peer review process, Enago actively engaged with the global audience by participating in Peer Review Week , organized in September 2023. Acknowledging the concerns posed by AI on integrity and the future of the peer review process, we released a set of diverse resources. These resources, aimed at assisting researchers and peer-reviewers to navigate the future of publishing, included infographics , checklists , panel discussion , comprehensive resources on the latest ethical standards , and some insightful articles . This initiative not only provided a platform to share insights but also positioned us as a key contributor to the ongoing evolution of the peer review process.

3. Empowering Community Over Commercialization

a. Enago Academy Launches Open Platform for Researchers to Increase Their Visibility

In a significant movement toward open access and community engagement, Enago Academy launched Open Platform in April. The initiative aims to transcend traditional boundaries in academic engagement and promote academic blogging, allowing researchers and academic writers to share their voices with a global audience of over a million readers. A community is already forming, with more than 1,500 registrants. Open Platform invites students, researchers, and scholars worldwide to raise their visibility in the scholarly space, connect with like-minded peers, and collaborate.

b. Plan S Cuts Off 68% of the Transformative Journals

In June 2023, Plan S announced that 68% of the “transformative journals” would be removed from its funding scheme: 77% of participating journals from Springer Nature, 63% from Elsevier, and 56% from the American Chemical Society (ACS) failed to meet their open access targets in 2022. Coalition S plans to end support for these transformative journals, stopping their funding from January 1, 2024.

c. ACS and Elsevier Announce New Policies for Open Access Publishing

ACS faced criticism for introducing an Article Development Charge (ADC) in October: authors are expected to transfer their rights to ACS for free but can buy back certain rights for $2,500. ACS says it implemented the ADC to empower authors and meet funder requirements for immediate green open access without embargo. The ADC covers all publishing service costs, including editorial and review activities, data verification, and author support, while allowing authors to share their accepted manuscripts immediately under a CC BY license.

In the same month, Elsevier announced the commencement of Geographical Pricing for Open Access (GPOA) pilot across 142 of its Gold Open Access journals in January 2024. The GPOA model is an industry first, and aims to make open access article publishing charges (APCs) more affordable for authors in low- and middle-income countries, after tailoring the pricing structure to countries’ local economic conditions and average income, based on Gross National Income (GNI) per capita. This initiative by Elsevier reflects a positive step toward a fairer and globally equitable pricing structure for scholarly publishing.

These innovative OA funding models demonstrate a dynamic evolution in OA publishing. While vigilance is necessary to ensure genuine benefits for the global research community, these initiatives reflect publishers’ efforts to adapt to evolving scholarly demands and enhance accessibility to quality research.

d. Enago Academy Advocates Open Access Globally

As the year progressed, Enago Academy actively participated in Open Access Week, held from October 23 to 29, to promote meaningful discussions around open science. With an emphasis on open science practices and the best interests of the academic community, we released a trove of resources aimed at raising awareness and providing actionable steps to highlight the vital role of community control in knowledge-sharing systems. Furthermore, our checklist on securing open access funding sources was added to the guidelines of the journal ‘eCancer’.

e. cOAlition S Announces a New Proposal after PlanS

While commercial entities played their roles in contributing to open science, in November, cOAlition S introduced another bold proposal advocating for the open publishing of all versions of an article, along with associated peer-review reports, without charging authors. This shift aimed to empower authors in publishing decisions; thereby fostering a community-based and scholar-led open-research communication system. However, the proposal is currently seeking feedback from the global research community, and distinguishes itself from Plan S by encouraging discussion rather than imposing strict mandates.

f. Enago Academy Releases the Open Access Survey Report

Enago Academy released the Open Access Survey Report based on the findings of the 5th global Research Risk Assessment survey that focused on assessing perceptions and attitudes towards open-access (OA) publishing and the “Pay to Publish” model. The survey reveals a persistent lack of awareness about open science among researchers, mainly due to perceived financial burdens. The report also highlighted concerns regarding the increasing cost of Article Processing Charges (APCs) , including potential geographical inequalities, a shift towards subscription-based publishing, and authors opting for lower-quality, predatory journals with lower APCs. The findings point to a gap in researcher awareness about funding for APCs and best practices for accessing such grants.

4. Traversing the AI Domain

a. ICML Bans Its Authors From Using AI Tools

The growing reliance on AI by researchers for conducting research and writing academic content left the community and its stakeholders unsettled. In January, the International Conference on Machine Learning (ICML) sparked a debate by banning authors from using generative AI tools like ChatGPT to write scientific papers.

b. Trinka AI Launches AI Content Detector and Trinka Paraphraser

After instances of journals banning researchers from using AI tools in manuscript writing were reported, Trinka AI introduced AI Content Detector to analyze written content for AI-generated text and strengthen research integrity. Trinka AI also introduced Trinka Paraphraser to enable users to paraphrase their content, ensuring originality and clarity in their written work.

c. Enago Studied the AI Perspectives of Researchers in its Global Survey

Enago released a survey to gauge researchers’ awareness of AI-driven technologies and to understand the future of automation in publishing through objective and impartial evaluation of data in research. The survey revealed that approximately 39% of participants supported the integration of AI-powered bots in proofreading their manuscripts, and 40% of participants consider cross-media intelligence extremely valuable. Furthermore, 77.31% of all participants preferred GPT-4 among AI-based writing tools. These findings not only raise concerns over the trustworthiness of AI-generated content but also call for education programs that train researchers in the responsible use of AI.

d. Science and AAAS Revoke the Ban on AI Assistance in Manuscript Writing

Although several journals initially banned researchers from using AI tools in research and writing sections of research papers, Science reportedly reversed its ban on ChatGPT in November. This decision allows authors to include content written by AI tools in their submitted papers. However, it prohibits the use of AI-generated images and other multimedia without explicit permission from the editors.

Also, the American Association for the Advancement of Science (AAAS) publishing arm stated that authors can use AI-assisted technologies as long as it’s noted in cover letters and acknowledgements, with detailed information provided in the methods section. Furthermore, authors are required to submit all the prompts used in their work and they would remain accountable for the accuracy of the AI-generated content, avoiding potential bias and plagiarism. The new guidelines reflect changes in editorial policies by organizations. However, the use of AI in the reviewing process is still not permitted due to concerns about potential breaches of manuscript confidentiality.

e. Enago Launches Enago Reports and Copilot

In November, Enago launched Enago Reports, a suite of AI-powered reports designed for quick assessment of various documents. Joining Enago’s suite of AI-powered products alongside Trinka AI and Enago Read (formerly RAxter.io), Enago Reports offers seven distinct reports covering Language Quality, Inclusive Language, File Proofreader, Technical Check, Reference Quality, Journal Finder, and Plagiarism Check. Additionally, we launched Copilot to help researchers streamline their research process and optimize productivity.

5. Promoting Diversity and Inclusion Within the Academic Community

a. Study Reports Barriers Faced by Historically Excluded Groups in Peer Review

A study shed light on the persistent disparities faced by scientists in the peer review process, revealing that certain authors, particularly those from historically excluded groups, experience worse outcomes. The study also made the shocking revelation of lower acceptance rates for authors from Asia, from non-English-speaking countries, and from socially and economically less developed nations.

b. Enago Addresses Diversity and Inclusion With Blogs

Recognizing the persistent barriers faced by historically excluded groups in peer review, Enago took a proactive stance in promoting diversity and inclusion within the academic community. We contributed to raising awareness of diversity, equity, and inclusion (DEI) by publishing articles that emphasize the importance of creating an inclusive research environment.

6. Educating the Academic Community

a. Enago Participates in External Events

Enago actively participated in educating the academic community by participating in conferences hosted by SSP and Unconference, and submitted articles to Upstream and EASE. These engagements provided a platform for sharing insights and contributing to the wider academic discourse.

b. Enago Academy Initiates Thought Leadership Pieces and News Updates

As a further step toward advancing knowledge, Enago Academy began publishing thought leadership pieces and regular news updates to engage the community. This endeavor aimed to keep the academic community informed about the latest developments and best practices.

The transformative events and initiatives of 2023 will reverberate through the future of academic publishing, and our active involvement in these endeavors reflects a commitment to fostering positive change on a worldwide scale. Here are some other happenings in the industry that captured attention in 2023.

Other Key Highlights

a. NSF Released Golden Ticket for Research Grants

In January, the US National Science Foundation (NSF) began considering a “Golden Ticket” pilot program that would allow individual reviewers to fund proposals they find promising. The proposed pilot is part of a broader trend among scientific funding agencies to experiment with less conservative and bureaucratic grant-making, fostering higher-risk, higher-reward proposals. The NSF hopes to avoid a slowdown in scientific and technological breakthroughs by promoting new ways to support research, and plans to trial the idea on close-to-market research with a focus on observable short-term success or failure. However, the specific programs for the pilot are yet to be determined.

b. CrossRef Acquired Retraction Watch

In a strategic move, CrossRef acquired Retraction Watch, making its database a publicly accessible resource. The collaboration aims to streamline the tracking of retractions by creating the largest open-source database of retractions. Retraction Watch will continue its independent journalistic work of investigating retractions and related issues, while CrossRef will focus on facilitating access to reliable retraction data. The initiative eases the struggle publishers and readers face in identifying retracted work.

c. Enago Academy in the Top 25 Academic Blog and Websites

Enago Academy was listed among the top 25 academic blogging sites by FeedSpot, based on criteria such as content quality, relevance, engagement, and popularity. The listing acknowledges Enago Academy as one of the leading blogging sites in its niche, reinforces our commitment to providing high-quality content and reliable resources, and enhances our credibility within the blogging community. For readers of Enago Academy, the recognition serves as an assurance of the reliability and authority of our resources.

d. Clarivate Added Preprint Citation Index to the Web of Science

Clarivate Plc introduced the Preprint Citation Index to its Web of Science platform allowing researchers to discover and link to preprint repositories. The index currently includes nearly two million preprints, and additional repositories will be added throughout 2023. Preprints are versions of research papers made publicly available before peer review. The new feature enables researchers to include preprints in their workflows, offering immediate access to up-to-date content. The Preprint Citation Index is designed to facilitate the connection between preprints and final versions of records, expanding the view of a researcher’s expertise. The addition of preprints is expected to speed up the research process and enhance scientific progress.

e. The United States Witnessed a Decline in Highly Cited Researchers

The 2023 list of Highly Cited Researchers™ revealed a gradual decline in the United States’ share of honorees, from 43.3% in 2018 to 37.5% in 2023. Mainland China, by contrast, demonstrated a noteworthy increase from 7.9% in 2018 to 17.9% in 2023, securing the second position, and the Chinese Academy of Sciences topped the list among institutions, reflecting a dynamic global shift in scientific contributions. The announcement underscores the commitment to maintaining stringent standards and transparent selection criteria in the evolving landscape of scholarly contributions.

From technological innovations to changing philosophies, the industry is undergoing a remarkable shift. These trends are not only shaping the present but also laying the foundation for a more robust, inclusive, and efficient future of research. As the year ends, the scientific community is poised to witness a renaissance in knowledge dissemination.

As the curtain drops on 2023, we extend our heartfelt gratitude to our readers and the scholarly community. Your unwavering support has been instrumental in keeping us committed to providing perspectives that shape the publishing landscape. As we eagerly anticipate 2024, we remain committed to contributing to the evolution of academic publishing. Join us as we move into another year of progress, collaborating and growing with you on your scholarly journey.

Happy Learning in 2024 with Enago!



Google DeepMind’s latest research at ICML 2023



Exploring AI safety, adaptability, and efficiency for the real world

Next week marks the start of the 40th International Conference on Machine Learning (ICML 2023), taking place 23-29 July in Honolulu, Hawai'i.

ICML brings together the artificial intelligence (AI) community to share new ideas, tools, and datasets, and make connections to advance the field. From computer vision to robotics, researchers from around the world will be presenting their latest advances.

Our director for science, technology & society, Shakir Mohamed, will give a talk on machine learning with social purpose, tackling challenges from healthcare and climate, taking a sociotechnical view, and strengthening global communities.

We’re proud to support the conference as a Platinum Sponsor and to continue working together with our long-term partners LatinX in AI , Queer in AI , and Women in Machine Learning .

At the conference, we’re also showcasing demos on AlphaFold , our advances in fusion science , and new models like PaLM-E for robotics and Phenaki for generating video from text.

Google DeepMind researchers are presenting more than 80 new papers at ICML this year. As many papers were submitted before Google Brain and DeepMind joined forces , papers initially submitted under a Google Brain affiliation will be included in a Google Research blog , while this blog features papers submitted under a DeepMind affiliation.

AI in the (simulated) world

The success of AI that can read, write, and create is underpinned by foundation models – AI systems trained on vast datasets that can learn to perform many tasks. Our latest research explores how we can translate these efforts into the real world, and lays the groundwork for more generally capable and embodied AI agents that can better understand the dynamics of the world, opening up new possibilities for more useful AI tools.

In an oral presentation, we introduce AdA, an AI agent that can adapt to solve new problems in a simulated environment, like humans do. In minutes, AdA can take on challenging tasks: combining objects in novel ways, navigating unseen terrains, and cooperating with other players.

Likewise, we show how we could use vision-language models to help train embodied agents – for example, by telling a robot what it’s doing.

The future of reinforcement learning

To develop responsible and trustworthy AI, we have to understand the goals at the heart of these systems. In reinforcement learning, one way this can be defined is through reward.

In an oral presentation, we aim to settle the reward hypothesis, first posited by Richard Sutton, which states that all goals can be thought of as maximising expected cumulative reward. We explain the precise conditions under which it holds, and clarify the kinds of objectives that can – and cannot – be captured by reward in a general form of the reinforcement learning problem.
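To make the quantity at stake concrete, here is the discounted cumulative return that reinforcement learning agents maximise; the rewards and discount factor below are arbitrary illustrative values, not from the paper.

```python
# The object of the reward hypothesis: the (discounted) cumulative reward
# an agent seeks to maximise, here computed for one fixed reward sequence.
def discounted_return(rewards, gamma=0.9):
    g = 0.0
    for r in reversed(rewards):  # fold back-to-front: G_t = r_t + gamma * G_{t+1}
        g = r + gamma * g
    return g

print(discounted_return([1.0, 0.0, 2.0]))  # 1 + 0.9*0 + 0.9**2 * 2 ≈ 2.62
```

The hypothesis asks whether every goal an agent might pursue can be expressed as maximising the expectation of such a sum for some choice of reward function.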

When deploying AI systems, they need to be robust enough for the real-world. We look at how to better train reinforcement learning algorithms within constraints , as AI tools often have to be limited for safety and efficiency.

In our research, which was recognised with an ICML 2023 Outstanding Paper Award , we explore how we can teach models complex long-term strategy under uncertainty with imperfect information games . We share how models can play to win two-player games even without knowing the other player's position and possible moves.

Challenges at the frontier of AI

Humans can easily learn, adapt, and understand the world around us. Developing advanced AI systems that can generalise in human-like ways will help to create AI tools we can use in our everyday lives and to tackle new challenges.

One way that AI adapts is by quickly changing its predictions in response to new information. In an oral presentation, we look at plasticity in neural networks and how it can be lost over the course of training – and ways to prevent loss.

We also present research that could help explain the type of in-context learning that emerges in large language models by studying neural networks meta-trained on data sources whose statistics change spontaneously, such as in natural language prediction.

In an oral presentation, we introduce a new family of recurrent neural networks (RNNs) that perform better on long-term reasoning tasks to unlock the promise of these models for the future.

Finally, in ‘ quantile credit assignment ’ we propose an approach to disentangle luck from skill. By establishing a clearer relationship between actions, outcomes, and external factors, AI can better understand complex, real-world environments.



Collection  12 March 2023

Journal Top 100 - 2022

This collection highlights our most downloaded* research papers published in 2022. Featuring authors from around the world, these papers showcase valuable research from an international community.

You can also check out the Top 100 across various subject areas here.

*Data obtained from SN Insights, which is based on Digital Science’s Dimensions.

mRNA vaccine-induced antibodies more effective than natural immunity in neutralizing SARS-CoV-2 and its high affinity variants

  • Dominic Esposito

Cats learn the names of their friend cats in their daily lives

  • Saho Takagi
  • Atsuko Saito
  • Hika Kuroshima

Metformin administration is associated with enhanced response to transarterial chemoembolization for hepatocellular carcinoma in type 2 diabetes patients

  • Woo Jin Jung
  • Sangmi Jang
  • Jin-Wook Kim

The impact of digital media on children’s intelligence while controlling for genetic differences in cognition and socioeconomic background

  • Bruno Sauce
  • Magnus Liebherr
  • Torkel Klingberg

Life tables of annual life expectancy and mortality for companion dogs in the United Kingdom

  • Kendy Tzu-yun Teng
  • Dave C. Brodbelt
  • Dan G. O’Neill

Bioarchaeological and palaeogenomic portrait of two Pompeians that died during the eruption of Vesuvius in 79 AD

  • Gabriele Scorrano
  • Serena Viva
  • Fabio Macciardi

Reading on a smartphone affects sigh generation, brain activity, and comprehension

  • Motoyasu Honma
  • Yuri Masaoka
  • Masahiko Izumizaki

Principal Component Analyses (PCA)-based findings in population genetic studies are highly biased and must be reevaluated

  • Eran Elhaik

The determinants of COVID-19 morbidity and mortality across countries

  • Dianna Chang
  • Kelvin Jui Keng Tan

Birdsongs alleviate anxiety and paranoia in healthy participants

  • J. Sundermann

Identification of ADS024, a newly characterized strain of Bacillus velezensis with direct Clostridioides difficile killing and toxin degradation bio-activities

  • Michelle M. O’Donnell
  • James W. Hegarty
  • Laurent Chesnel

Multiple sclerosis genetic and non-genetic factors interact through the transient transcriptome

  • Renato Umeton
  • Gianmarco Bellucci
  • Giovanni Ristori

The effect of metformin on the survival of colorectal cancer patients with type 2 diabetes mellitus

  • Zeinab Tarhini
  • Kamelia Manceur
  • Niki Christou

Chemical characterisation of the vapour emitted by an e-cigarette using a ceramic wick-based technology

  • M. Isabel Pinto

Large-magnitude (VEI ≥ 7) ‘wet’ explosive silicic eruption preserved a Lower Miocene habitat at the Ipolytarnóc Fossil Site, North Hungary

  • Dávid Karátson
  • Imre Szarvas

Far-UVC (222 nm) efficiently inactivates an airborne pathogen in a room-sized chamber

  • Waseem Hiwar
  • Kenneth Wood

Low dose aspirin associated with greater bone mineral density in older adults

  • Hongzhan Liu
  • Xungang Xiao

First direct evidence of adult European eels migrating to their breeding place in the Sargasso Sea

  • Rosalind M. Wright
  • Adam T. Piper
  • David Righton

Infections with the SARS-CoV-2 Delta variant exhibit fourfold increased viral loads in the upper airways compared to Alpha or non-variants of concern

  • Christian J. H. von Wintersdorff
  • Jozef Dingemans
  • Paul H. M. Savelkoul

Inappropriate sinus tachycardia in post-COVID-19 syndrome

  • Júlia Aranyó
  • Victor Bazan
  • Roger Villuendas

The microstructure and the origin of the Venus from Willendorf

  • Gerhard W. Weber
  • Alexander Lukeneder

COVID-19 reinfections among naturally infected and vaccinated individuals

  • Sezanur Rahman
  • M. Mahfuzur Rahman
  • Mustafizur Rahman

Lockdown measures during the COVID-19 pandemic strongly impacted the circulation of respiratory pathogens in Southern China

  • Heping Wang
  • Yuejie Zheng
  • Wenjian Wang

Alzheimer’s disease large-scale gene expression portrait identifies exercise as the top theoretical treatment

  • Mason A. Hill
  • Stephen C. Gammie

COVID-19 symptoms are reduced by targeted hydration of the nose, larynx and trachea

  • Carolin Elizabeth George
  • Gerhard Scheuch
  • David A. Edwards

SARS-CoV-2 spike protein induces cognitive deficit and anxiety-like behavior in mouse via non-cell autonomous hippocampal neuronal death

  • Junyoung Oh
  • Woo-Hyun Cho
  • Sung Joong Lee

Abdominal pain patterns during COVID-19: an observational study

  • Alexandre Balaphas
  • Kyriaki Gkoufa
  • Christian Toso

Detection of human pathogenic bacteria in rectal DNA samples from Zalophus californianus in the Gulf of California, Mexico

  • Francesco Cicala
  • David Ramírez-Delgado
  • Alexei F. Licea-Navarro

Industrialised fishing nations largely contribute to floating plastic pollution in the North Pacific subtropical gyre

  • Laurent Lebreton
  • Sarah-Jeanne Royer
  • Matthias Egger

Hypertension and diabetes including their earlier stage are associated with increased risk of sudden cardiac arrest

  • Seung Young Roh
  • Young-Hoon Kim

Utility of an artificial intelligence system for classification of esophageal lesions when simulating its clinical use

  • Ayaka Tajiri
  • Ryu Ishihara
  • Tomohiro Tada

Prevalence, age of decision, and interpersonal warmth judgements of childfree adults

  • Zachary P. Neal
  • Jennifer Watling Neal

Acute and protracted abstinence from methamphetamine bidirectionally changes intrinsic excitability of indirect pathway spiny projection neurons in the dorsomedial striatum

  • Sanghoon Choi
  • Steven M. Graves

Indeterminacy of cannabis impairment and ∆9-tetrahydrocannabinol (∆9-THC) levels in blood and breath

  • Gregory T. Wurz
  • Michael W. DeGregorio

High rates of plasmid cotransformation in E. coli overturn the clonality myth and reveal colony development

  • Delia Tomoiaga
  • Jaclyn Bubnell
  • Paul Feinstein

Metformin sensitizes leukemic cells to cytotoxic lymphocytes by increasing expression of intercellular adhesion molecule-1 (ICAM-1)

  • Nerea Allende-Vega
  • Joaquin Marco Brualla
  • Martin Villalba

Incorporation of machine learning and deep neural network approaches into a remote sensing-integrated crop model for the simulation of rice growth

  • Seungtaek Jeong
  • Jong-min Yeom

Perceiving societal pressure to be happy is linked to poor well-being, especially in happy nations

  • Egon Dejonckheere
  • Joshua J. Rhee
  • Brock Bastian

The earliest Pleistocene record of a large-bodied hominin from the Levant supports two out-of-Africa dispersal events

  • Alon Barash
  • Miriam Belmaker

Generation mechanism and prediction of an observed extreme rogue wave

  • Johannes Gemmrich

Fitness tracking reveals task-specific associations between memory, mental health, and physical activity

  • Jeremy R. Manning
  • Gina M. Notaro
  • Paxton C. Fitzpatrick

Domestic dogs (Canis familiaris) grieve over the loss of a conspecific

  • Stefania Uccheddu
  • Lucia Ronconi
  • Federica Pirrone

Human transgenerational observations of regular smoking before puberty on fat mass in grandchildren and great-grandchildren

  • Jean Golding
  • Steve Gregory
  • Matthew Suderman

Chlamydia pneumoniae can infect the central nervous system via the olfactory and trigeminal nerves and contributes to Alzheimer’s disease risk

  • Jenny A. K. Ekberg

Oxycodone/naloxone versus tapentadol in real-world chronic non-cancer pain management: an observational and pharmacogenetic study

  • Jordi Barrachina
  • Cesar Margarit
  • Ana M. Peiró

Cooking methods are associated with inflammatory factors, renal function, and other hormones and nutritional biomarkers in older adults

  • Montserrat Rodríguez-Ayala
  • José Ramón Banegas
  • Pilar Guallar-Castillón

Classification of pig calls produced from birth to slaughter according to their emotional valence and context of production

  • Elodie F. Briefer
  • Ciara C.-R. Sypherd
  • Céline Tallet

Higher emotional awareness is associated with greater domain-general reflective tendencies

  • Michelle Persich
  • William D. S. Killgore

A large Megaraptoridae (Theropoda: Coelurosauria) from Upper Cretaceous (Maastrichtian) of Patagonia, Argentina

  • Alexis M. Aranciaga Rolando
  • Matias J. Motta
  • Fernando E. Novas

Long COVID occurrence in COVID-19 survivors

  • Aya Sugiyama
  • Junko Tanaka

Water activated disposable paper battery

  • Alexandre Poulin
  • Xavier Aeby
  • Gustav Nyström

Intestinal preservation in a birdlike dinosaur supports conservatism in digestive canal evolution among theropods

  • Yichuan Liu

Antiviral effect of cetylpyridinium chloride in mouthwash on SARS-CoV-2

  • Hirofumi Sawa

Evidence of an oceanic impact and megatsunami sedimentation in Chryse Planitia, Mars

  • J. Alexis P. Rodriguez
  • Darrel K. Robertson
  • Mario Zarroca

Curcumin and metformin synergistically modulate peripheral and central immune mechanisms of pain

  • Peththa Wadu Dasuni Wasana
  • Pasarapa Towiwat

The first occurrence of an avian-style respiratory infection in a non-avian dinosaur

  • D. Cary Woodruff
  • Ewan D. S. Wolff
  • Lawrence M. Witmer

Optimal linear estimation models predict 1400–2900 years of overlap between Homo sapiens and Neandertals prior to their disappearance from France and northern Spain

  • Igor Djakovic
  • Alastair Key
  • Marie Soressi

The influence of time on the sensitivity of SARS-CoV-2 serological testing

  • Arturo Torres Ortiz
  • Fernanda Fenn Torrente
  • Louis Grandjean

Online misinformation is linked to early COVID-19 vaccination hesitancy and refusal

  • Francesco Pierri
  • Brea L. Perry
  • John Bryden

A distinct symptom pattern emerges for COVID-19 long-haul: a nationwide study

  • Melissa D. Pinto
  • Charles A. Downs
  • Natalie Lambert

SARS-CoV-2-reactive IFN-γ-producing CD4+ and CD8+ T cells in blood do not correlate with clinical severity in unvaccinated critically ill COVID-19 patients

  • Beatriz Olea
  • Eliseo Albert
  • David Navarro

Classification of 74 facial emoji’s emotional states on the valence-arousal axes

  • Gaku Kutsuzawa
  • Hiroyuki Umemura
  • Yoshiyuki Kobayashi

The emergence of a new sex-system (XX/XY1Y2) suggests a species complex in the “monotypic” rodent Oecomys auyantepui (Rodentia, Sigmodontinae)

  • Willam Oliveira da Silva
  • Celina Coelho Rosa
  • Cleusa Yoshiko Nagamachi

Detection of COVID-19 using multimodal data from a wearable device: results from the first TemPredict Study

  • Ashley E. Mason
  • Frederick M. Hecht
  • Benjamin L. Smarr

Spinal degeneration is associated with lumbar multifidus morphology in secondary care patients with low back or leg pain

  • Jeffrey R. Cooley
  • Tue S. Jensen
  • Jeffrey J. Hebert

Phenomenology and content of the inhaled N,N-dimethyltryptamine (N,N-DMT) experience

  • David Wyndham Lawrence
  • Robin Carhart-Harris
  • Christopher Timmermann

A gigantic bizarre marine turtle (Testudines: Chelonioidea) from the Middle Campanian (Late Cretaceous) of South-western Europe

  • Oscar Castillo-Visa
  • Àngel H. Luján
  • Albert Sellés

The first experience with fully endoscopic posterior cervical foraminotomy and discectomy for radiculopathy performed in Viet Duc University Hospital

  • Son Ngoc Dinh
  • Hung The Dinh

Mapping the “catscape” formed by a population of pet cats with outdoor access

  • Richard Bischof
  • Nina Rosita Hansen
  • Torbjørn Haugaasen

Investigation of humans individual differences as predictors of their animal interaction styles, focused on the domestic cat

  • Lauren R. Finka
  • Lucia Ripari
  • Marnie L. Brennan

Genesis of fecal floatation is causally linked to gut microbial colonization in mice

  • Syed Mohammed Musheer Aalam
  • Daphne Norma Crasta
  • Nagarajan Kannan

Young children’s screen time during the first COVID-19 lockdown in 12 countries

  • Christina Bergmann
  • Nevena Dimitrova
  • Nivedita Mani

Cichlids and stingrays can add and subtract ‘one’ in the number space from one to five

  • V. Schluessel

Elevated estradiol levels in frozen embryo transfer have different effects on pregnancy outcomes depending on the stage of transferred embryos

  • Liming Ruan

Group VR experiences can produce ego attenuation and connectedness comparable to psychedelics

  • David R. Glowacki
  • Rhoslyn Roebuck Williams
  • Mike Chatziapostolou

New therizinosaurid dinosaur from the marine Osoushinai Formation (Upper Cretaceous, Japan) provides insight for function and evolution of therizinosaur claws

  • Yoshitsugu Kobayashi
  • Ryuji Takasaki
  • Yoshinori Hikida

Smartphone-based ecological momentary assessment reveals mental health benefits of birdlife

  • Ryan Hammoud
  • Stefania Tognin
  • Andrea Mechelli

Long-term outcomes of cataract surgery with toric intraocular lens implantation by the type of preoperative astigmatism

  • Tetsuro Oshika
  • Shinichiro Nakano
  • Tsutomu Kaneko

Forest fire detection system using wireless sensor networks and machine learning

  • Udaya Dampage
  • Lumini Bandaranayake
  • Bathiya Jayasanka

Misinformation of COVID-19 vaccines and vaccine hesitancy

  • Sun Kyong Lee
  • Juhyung Sun
  • Shane Connelly

Deep language algorithms predict semantic comprehension from brain activity

  • Charlotte Caucheteux
  • Alexandre Gramfort
  • Jean-Rémi King

Children with autism spectrum disorder show atypical electroencephalographic response to processing contextual incongruencies

  • Amparo V. Márquez-García
  • Vasily A. Vakorin
  • Sam M. Doesburg

A generalizable one health framework for the control of zoonotic diseases

  • Ria R. Ghai
  • Ryan M. Wallace
  • Casey Barton Behravesh

HS3ST2 expression induces the cell autonomous aggregation of tau

  • M. B. Huynh
  • N. Rebergue
  • D. Papy-Garcia

Exceptional warming over the Barents area

  • Ketil Isaksen
  • Øyvind Nordli
  • Tatiana Karandasheva

A new Early Cretaceous lizard in Myanmar amber with exceptionally preserved integument

  • Andrej Čerňanský
  • Edward L. Stanley
  • Susan E. Evans

Coffee consumption and diabetic retinopathy in adults with diabetes mellitus

  • Hak Jun Lee
  • Daniel Duck-Jin Hwang

Shifts in the foraging tactics of crocodiles following invasion by toxic prey

  • Abhilasha Aiyer
  • Richard Shine
  • Georgia Ward-Fear

Production of high loading insulin nanoparticles suitable for oral delivery by spray drying and freeze drying techniques

  • Alberto Baldelli
  • Anubhav Pratap-Singh

Cable news and COVID-19 vaccine uptake

  • Matteo Pinna
  • Christoph Goessmann

Estimating the time of last drinking from blood ethyl glucuronide and ethyl sulphate concentrations

  • Zhongyuan Guo

COVID-19 infections in infants

  • Małgorzata Sobolewska-Pilarczyk
  • Maria Pokorska-Śpiewak
  • Małgorzata Pawłowska

COVID-19 increases the risk for the onset of atrial fibrillation in hospitalized patients

  • Jakob Wollborn
  • Sergey Karamnov
  • Jochen D. Muehlschlegel

Childhood temperament and adulthood personality differentially predict life outcomes

  • Amanda J. Wright
  • Joshua J. Jackson

Antivirus applied to JAR malware detection based on runtime behaviors

  • Ricardo P. Pinheiro
  • Sidney M. L. Lima
  • Wellington P. dos Santos

Therapeutic enzyme engineering using a generative neural network

  • Andrew Giessel
  • Athanasios Dousis
  • Stuart Licht

Identification of genes associated with human-canine communication in canine evolution

  • Akiko Tonoike
  • Ken-ichi Otaki
  • Miho Nagasawa

Breath chemical markers of sexual arousal in humans

  • G. Pugliese
  • J. Williams

A 5-km-thick reservoir with >380,000 km³ of magma within the ancient Earth's crust

  • Rais Latypov
  • Sofya Chistyakova
  • Mauritz van der Merwe

Return of large fin whale feeding aggregations to historical whaling grounds in the Southern Ocean

  • Helena Herr
  • Sacha Viquerat
  • Bettina Meyer

Nicola Jones, Knowable Magazine

A look at the top science stories and breakthroughs of 2023

As 2023 rolls to a close, Knowable Magazine has looked back over its articles and canvassed editorial committee members from the 51 academic journals — covering analytical chemistry to vision science — published by Knowable’s parent company, Annual Reviews. From good news to bad, from novel vaccines to insect invaders, this year left us with much to ponder. Here we present 12 newsworthy developments from 2023.

Jabs for hope

Hot on the heels of the COVID-19 vaccine success story (including updated jabs that target Omicron subvariants of the rapidly shifting virus), 2023 saw the greenlighting of several new vital vaccines. Abrysvo and Arexvy, the first vaccines against respiratory syncytial virus (RSV), a cold-like virus that can be dangerous for the old or the young, are now available in the United States and elsewhere. And the World Health Organization has recommended a second malaria vaccine, R21, following RTS,S in 2021. RTS,S has already been given to nearly 2 million children in Africa; the new vaccine is about half the price.

This double hit against malaria is a “huge win” for kids, says Matthew Laurens, a pediatric infectious disease specialist at the University of Maryland School of Medicine in Baltimore, who wrote about malaria vaccines in a 2022 opinion article for Knowable. “Like COVID-19, we need multiple malaria vaccines if we’re to succeed in combating this deadly disease.”

Scary smarts

One of the biggest newsmakers of the year was artificial intelligence (AI). San Francisco tech company OpenAI’s conversational bot ChatGPT, first launched in November 2022, was estimated to have more than 100 million monthly users by January 2023. People were simultaneously impressed and appalled by the capacity of AI based on deep learning (a technique inspired by the human brain) to write everything from poetry to class essays and research papers.

“In terms of public interest, I have not seen anything like this in my 30-year career,” says Colin Phillips, a psycholinguist at the University of Maryland and co-editor of the Annual Review of Linguistics.

WATCH: ‘Godfather of AI’ discusses dangers the developing technologies pose to society

Rapidly improving AI has left governments, scientists and consumers alike wondering how best to harness its abilities and guard against its misuse, including the deepfakes now featuring in scams and propaganda. International leaders agreed to work together to guide the technology at the UK’s AI Safety Summit in November — hoping to get regulations in place before computers grow smarter than people.

Wild weather

News reports of broken heat records are starting to sound like, well, broken records. But 2023 really was a standout: The planet had its hottest year on record. As of October, it was about 1.4 degrees Celsius warmer than the 1850–1900 average, topping the previous greatest above-average heat bumps of about 1.3 degrees C in both 2020 and 2016.

This extreme heat of 2023 resulted from both long-term climate change trends and the year’s El Niño, a natural climate pattern that, overall, tends to make the world warmer. This was the hottest summer since recordkeeping began in 1880, and September was by far the most weirdly warm month ever seen.

These trends have been shown to play a role in much of 2023’s wild and destructive weather, from Canada’s wildfires to Libya’s floods. Researchers suspect that the planet will hit a long-term average of 1.5 degrees C warming — a commonly quoted target for maximum warming — sometime in the early 2030s.

READ MORE: As the threat of wildfires rises, groups tasked with fighting them turn to AI for help

“Climate change is no longer about our grandchildren or polar bears — it is here, and now affecting everyone, everywhere on the planet, but especially devastating for the poor,” says Diana Ürge-Vorsatz, an environmental scientist and climate expert at Central European University and vice-chair of the Intergovernmental Panel on Climate Change. Ürge-Vorsatz co-penned an editorial calling for action against environmental crises in 2022’s volume of the Annual Review of Environment and Resources, for which she is a committee member.

Everything electric to end emissions

In December, delegates at the United Nations climate change convention discussed the first official inventory of our actions to combat global warming. The “global stocktake” concluded that while the world is making some progress and it will be possible to reach the Paris goal of limiting global warming to 2 degrees Celsius, leaders are going to have to accelerate action to get there.

For now, fossil fuel production remains too high for climate targets. But a Climate Analytics report says that there’s a 70 percent chance that greenhouse gas emissions will fall in 2024, making 2023 the “peak” year. Of course, getting away from fossil fuels means ramping up alternative energy sources. Renewables are soaring — particularly solar, and particularly in China.

READ MORE: How to slash emissions across the U.S. economy, according to experts

“Prices fell and penetration increased exceeding all projections,” says Ürge-Vorsatz of renewables. “In the first half of 2023, several countries have produced over three-quarters of their electricity from weather-dependent renewable forms of energy — still often deemed impossible by many experts.” At the December UN meeting, nations pledged to triple the planet’s renewable energy capacity by 2030.

New batteries in development will also help — 2023 saw a lab breakthrough in developing “lithium air” batteries. Meanwhile, researchers note some signs of hope that nuclear fusion might one day be feasible. The National Ignition Facility, an experimental laser-based fusion device at Lawrence Livermore National Laboratory in California, has produced slightly more energy than it used a total of four times since December 2022.

Fancy feast

As the world’s population grows, the quest continues for alternative high-protein foods that might mimic the sensory pleasures of meat without the attendant environmental problems from deforestation, greenhouse gas emissions and more. One option now on US plates is lab-grown meat, which was approved by regulators in June 2023, making the United States the second country to move “cellular meat” to market. Meanwhile, companies are also pursuing ever-better ways to make high-protein foods out of everything from insects to filamentous fungi to microbes that can convert air and hydrogen into edible food.

It’s exciting to see lab-grown meat finally reach the market, says Julian McClements, a food scientist at the University of Massachusetts Amherst and editor of the Annual Review of Food Science and Technology, who has written about next-generation plant-based foods. Scaling up that tech, he says, “has potential to create a more healthy, sustainable and ethical food supply.” At the same time, many nutrition experts are raising the alarm about ultraprocessed foods, and foods packed with sugars, salts and fats to increase desirability. A more sustainable and healthier alternative to the world’s current diet would be simply to eat more plants.

Efforts to better understand the human body in health and disease got a boost this year with several projects aiming to map out vital organs and improve diversity in medical datasets. “It’s really an exciting time,” says Sarah Teichmann, co-lead of the Human Cell Atlas initiative and a member of the Annual Review of Genomics and Human Genetics editorial committee.

In June, researchers unveiled a comprehensive atlas of the lung, compiled from studies of 2.4 million cells in 486 people and highlighting cellular features common in cancer and COVID-19. In October, the largest-yet brain atlas was released, including more than 3,000 cell types, some of them new to science. Researchers are also expanding efforts to sequence and study the genomes of ever more people on this planet, hoping to shift medical datasets away from a current, common bias toward men of European descent. In October, a plan was launched to create the largest-yet database of genomes from people of African ancestry. All these efforts “could help lead to global democratization of health care in the future,” says Teichmann.

Ocean waves

For the oceans in 2023, “it was the best of times, it was the worst of times,” says Nancy Knowlton, a marine biologist with the Smithsonian National Museum of Natural History in Washington, DC, who wrote about reasons to be optimistic about ocean health in the 2021 Annual Review of Marine Science. On one hand, beleaguered global oceans hit a record high temperature in April and in August (near the tail end of the summer season for the global south and north, respectively), with “seas as hot as a hot tub,” says Knowlton. On the other hand, she says, 2023 saw “major steps being taken to reverse the trajectory of ocean decline.”

READ MORE: The race to rescue corals from a blistering marine heat wave

That includes a High Seas Treaty, agreed upon in March after years of effort, to provide more oversight of international waters. The treaty carves out ways to share benefits from genetic resources dug from the deep, and to create marine protected areas far from any national shores. Meanwhile, progress was made on a separate treaty aimed at eliminating plastic pollution — including the single-use plastics that plague marine environments. That treaty, due in 2024, might cap plastic production, better regulate recycling and promote more sustainable, healthier materials — like bioplastics or novel uses of wood.

Insect invaders

The insects in the spotlight this year were bedbugs, which ravaged first Paris (during Fashion Week, no less) and then Asia. But buggy concerns go far beyond this; a raft of far more damaging pests are also on the move, devastating crops and forests around the world. In September, the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) reported that alien invasives, including insects, are a major factor in 60 percent of species extinctions. But while pests are spreading and making pests of themselves, there’s a parallel problem of insect decline (sometimes called the “insect apocalypse”), though the data are still too scant to fully document the collapse among our planet’s 5.5 million species of insects.

“Insects are not optional; they are the little things that run the world and if they were to disappear, humans would last but a few months,” says University of Delaware entomologist Doug Tallamy (read his 2023 interview with Knowable). Researchers are investigating new angles for insect conservation, including using genomics to track and assist the creatures’ ability to adapt.

Transplant tech

Lab advances are promising hope for people in need of organ transplants. This year, medical researchers for the first time managed to transplant previously frozen organs: In a landmark study published in June, rats successfully received kidneys that had been cryogenically frozen for 100 days. Researchers also made great strides in exploring medical use of organs from animals: Last year, a 57-year-old man with terminal heart illness survived for two months after receiving a pig heart. In 2023, researchers reported that a monkey survived an amazing two years with a pig kidney, thanks in part to genetic modification.

“Organ transplantation is close to my heart, as some family members have been recipients of kidney transplants,” says Edgar Arriaga, a member of the Annual Review of Analytical Chemistry editorial committee who applies chemistry and engineering to biomedical challenges. The new developments “shine renewed optimism onto many people whose only hope for having a normal life is a functional organ.”

Reaching for stars

India became the fourth country to successfully put a lander on the Moon, to great fanfare. And NASA announced its intended crew for the next planned trip to the Moon (which will be in 2024 at the earliest). The four-person crew includes the first woman, the first person of color and the first non-American to head to the Moon.

Meanwhile, researchers looking far beyond the Moon to the stars now have a better tool in their toolkit: code that, finally, treats stars as the somewhat flattened, rotating, evolving balls that they are, rather than assuming they are perfect spheres. “At long last, this paper comes up with better models,” says Conny Aerts, an astrophysicist at KU Leuven in Belgium and a member of the Annual Review of Astronomy and Astrophysics editorial committee. “This is a remarkable achievement of major importance for astrophysics, because almost everyone in our field relies on stellar models.”

Fighting fat

The World Obesity Federation’s 2023 atlas predicts that more than half of the global population will be obese or overweight by 2035 — but new, effective drugs are emerging based on a better understanding of the hormones that control body weight. Many previous weight loss drugs targeted neurotransmitters such as norepinephrine to hit satiety and hunger centers in the brain. A new strategy instead targets the gut hormone GLP-1 (glucagon-like peptide 1), with a swath of benefits ranging from appetite suppression to blood sugar control.

WATCH: How new weight loss drugs are changing the conversation around treating obesity

The GLP-1-targeting drug Wegovy, approved in 2021, has proved wildly popular for weight loss, and this year a study showed that it could address heart problems in some patients, too. In November, a competitor, Zepbound, was also approved for weight loss in the United States. These developments are expected to lower the price on these expensive, injectable drugs. “This is truly an exciting and propitious time to be caring for individuals with the disease of obesity,” write endocrinologists Ania Jastreboff and Robert Kushner in an article tackling the subject in the Annual Review of Medicine.

Gene editing

In November, the UK medicines regulatory agency became the first in the world to approve a therapy that uses CRISPR gene editing — a revolutionary biotechnology that snips DNA like a molecular scalpel. The United States followed suit in December. The treatment, called Casgevy, helps people with conditions caused by defective hemoglobin production or function, including sickle cell disease. The therapy begins by extracting blood-producing cells from a patient’s bone marrow; the cells are genetically altered in the lab so that they produce fetal rather than adult hemoglobin, then infused back into the patient.

“The CRISPR revolution is the fastest advance in biomedicine I have seen,” says Donald Kohn, a medical geneticist at UCLA and coauthor of a recent overview of gene therapy in the Annual Review of Medicine. “This approval is just the first of many gene medicines to come.” CRISPR therapies are also being developed to tackle cancers, blindness, HIV, diabetes and more.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Nicola Jones is a freelance science reporter and editor who lives in Pemberton, British Columbia.


research papers 2023

Trending Research

Mora: Enabling Generalist Video Generation via a Multi-Agent Framework

lichao-sun/mora • 20 Mar 2024

Sora is the first large-scale generalist video generation model that garnered significant attention across society.

Evolutionary Optimization of Model Merging Recipes

Surprisingly, our Japanese Math LLM achieved state-of-the-art performance on a variety of established Japanese LLM benchmarks, even surpassing models with significantly more parameters, despite not being explicitly trained for such tasks.

FeatUp: A Model-Agnostic Framework for Features at Any Resolution

Deep features are a cornerstone of computer vision research, capturing image semantics and enabling the community to solve downstream tasks even in the zero- or few-shot regime.

One-Step Image Translation with Text-to-Image Models

In this work, we address two limitations of existing conditional diffusion models: their slow inference speed due to the iterative denoising process and their reliance on paired data for model fine-tuning.

MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images

We propose MVSplat, an efficient feed-forward 3D Gaussian Splatting model learned from sparse multi-view images.

GRM: Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation

justimyhxu/grm • 21 Mar 2024

We introduce GRM, a large-scale reconstructor capable of recovering a 3D asset from sparse-view images in around 0.1 seconds.

T-Rex2: Towards Generic Object Detection via Text-Visual Prompt Synergy

idea-research/t-rex • 21 Mar 2024

Recognizing the complementary strengths and weaknesses of both text and visual prompts, we introduce T-Rex2 that synergizes both prompts within a single model through contrastive learning.

Analyzing and Improving the Training Dynamics of Diffusion Models

Diffusion models currently dominate the field of data-driven image synthesis with their unparalleled scaling to large datasets.

LLM4Decompile: Decompiling Binary Code with Large Language Models

We release the first open-access decompilation LLMs, ranging from 1B to 33B parameters, pre-trained on 4 billion tokens of C source code and the corresponding assembly code.

LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression

The challenge is that information entropy may be a suboptimal compression metric: (i) it only leverages unidirectional context and may fail to capture all essential information needed for prompt compression; (ii) it is not aligned with the prompt compression objective.

Stanford Woods Institute for the Environment

2023 Stanford environmental research: A year in review

A new report looks back at the most impactful environment and sustainability research from Stanford scholars in 2023.

Each year, researchers at Stanford produce hundreds of studies that advance our knowledge of environmental systems and generate innovative solutions to some of the most pressing energy, ecology, and sustainability challenges.

The Stanford Environmental Research Year in Review, produced by the Woods Institute for the Environment, provides a snapshot of key studies from scholars across Stanford’s seven schools. These publications demonstrate how Stanford faculty, students, postdoctoral scholars, and research staff are building connections between knowledge generation and scalable impact.

Download the Stanford Environmental Research 2023 Year in Review

This year’s review spans a wide range of topics that can inform environmental policies, technology, conservation, business, and decision-making, including:

  • Incorporating justice and equity frameworks into conservation and urban access to nature
  • Wildfire management, public health impacts, and policy recommendations to support the firefighting workforce
  • Climate-resilient approaches for designing marine protected areas and adapting to coastal flooding
  • Water security and new technology for wastewater treatment and disinfection
  • Interconnectedness of biodiversity and food security
  • Pathways to upcycle materials for sustainable infrastructure

The examples highlighted in the Stanford Environmental Research Year in Review are far from exhaustive, but they illustrate the breadth and depth of expertise brought to collaborative partnerships at the university and beyond. In total, Stanford scholars produced more than 700 peer-reviewed publications related to the environment and sustainability in 2023.

View the 2023 publications collection

To learn more:

Madison Pobis, Stanford Woods Institute for the Environment, [email protected]

Highly Cited Researchers 2023

Highly Cited Researchers have demonstrated significant and broad influence in their field(s) of research.

Each researcher selected has authored multiple Highly Cited Papers™ which rank in the top 1% by citations for their field(s) and publication year in the Web of Science™ over the past decade. However, citation activity is not the sole selection indicator. A preliminary list based on citation activity is then refined using qualitative analysis and expert judgement.

Of the world’s population of scientists and social scientists, Highly Cited Researchers™ are 1 in 1,000.
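The top-1% criterion described above can be illustrated as a simple nearest-rank percentile computation over citation counts within a single field and publication year. This is only a sketch with invented numbers: Clarivate’s actual methodology (Essential Science Indicators baselines plus qualitative review) differs in detail, and the paper names and threshold logic here are hypothetical.

```python
def highly_cited_threshold(citations):
    """Return the citation count marking the top 1% of a field-year cohort.

    Uses a simple nearest-rank cutoff; Clarivate's real selection adds
    qualitative review on top of citation activity, so treat this as
    an illustration only.
    """
    ranked = sorted(citations, reverse=True)
    top_n = max(1, round(len(ranked) * 0.01))  # size of the top 1% (at least one paper)
    return ranked[top_n - 1]

# Hypothetical citation counts for one field and publication year.
papers = {
    "paper_a": 950, "paper_b": 40, "paper_c": 12,
    "paper_d": 7, "paper_e": 3,
}
threshold = highly_cited_threshold(list(papers.values()))
highly_cited = [p for p, c in papers.items() if c >= threshold]
```

With this toy cohort of five papers, the top 1% rounds up to a single paper, so only the most-cited one clears the threshold; real field-year cohorts contain thousands of papers, making the 1% cutoff far less coarse.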

2023 Analysis

Experts from the Institute for Scientific Information™ provide their detailed insight into the list of Highly Cited Researchers 2023, including their geographical locations, primary tenured research institutes and a breakdown of their fields of research.

Read our analysis of the 2023 list.

Best Databases 2023

Databases foster deep research, expansive reading, and a myriad of inquiry avenues. These 10 tools, covering food, Shakespeare, study skills, and much more, are our selections for the best databases of 2023.

Bloomsbury Food Library. Bloomsbury

Through reference books, podcasts, research and learning tools, and image collections, researchers can explore a myriad of topics, including food waste, fast food, organic farming, food inequality, cultural food histories, and more in this comprehensive, multi-disciplinary resource. The database covers the global history of food from prehistory to the present day. Content is available across three beautifully presented collections—the Core Collection, Global Food Histories, and Food Sustainability and Security.

Conflict in Indochina: Foreign Office Files for Vietnam, Laos, and Cambodia, 1959–1964. AM

This first installment in AM’s two-part module covering the Vietnam War draws upon reports, correspondence, maps, photographs, newspaper clippings, and economic data from the UK’s National Archives to illuminate the political, social, and cultural upheaval wrought by years of instability and conflict. This sophisticated resource examines the role of regional actors, from Chinese and Soviet intervention to the complexities of Anglo-American relations.

Gale Business: Plan Builder. Gale

With this step-by-step online planning tool, entrepreneurs can learn how to start, manage, evaluate, and optimize a business or nonprofit organization. With the help of metrics, charts, guides, questionnaires, and worksheets, aspiring or current business owners can analyze start-up ideas, develop comprehensive business plans, and identify costs, taxes, and sales targets. A suite of practical, analytical tools assists users through every step of the process.

LGBT Magazine Archive. ProQuest

With documents dating back to 1945, the first collection in ProQuest’s LGBT Magazine Archive offers a panoramic selection of leading magazines and serials serving LGBTQIA+ people. These difficult-to-find publications address a range of topics, from activism and politics to mental health, lifestyle, and arts and literature. Notable titles include Gay News, Gay Times, The Pink Paper, Man and Society, and Transgender Tapestry.

Platino Educa. Platino Educa

Created by and for teachers in Spain and Latin America, and now expanded into the U.S. higher-education market, this outstanding resource presents unlimited and exceptional access to Spanish- and Portuguese-language films for teaching and learning. In addition to over 330 feature films, shorts, and documentaries, many of which are unavailable for streaming elsewhere, the site supplies supplementary teaching guides with learning objectives, discussion prompts, filmmaker profiles, worksheets, and more.

ProQuest One Psychology. ProQuest

This impressive database assists students exploring psychological research methodologies, theories, and therapies, through user-friendly access to a robust collection of curated multiformat materials. Content includes scholarly journals, reference works, dissertations, and access to thousands of videos and transcripts of counseling sessions. Interactive topic pages cover subjects ranging from anxiety and depression to cognitive behavioral therapy, through journal articles, books, related therapy videos, and more. 

Sage Skills: Student Success. Sage

This interactive digital resource is designed to help students practice essential skills for academic success. The platform comprises 10 comprehensive modules covering critical interdisciplinary skills such as academic writing, data literacy, information literacy, study strategies, and personal development and well-being. Each module employs different learning modalities (including self-guided assessments, videos, and interactive scenarios) to accommodate varied learning styles and approaches.

Secret Files from World Wars to Cold War. Coherent Digital

A superb primary-source collection offering students and researchers access to over 12,000 secret intelligence files, revealing previously unavailable information about the Spanish Civil War, WWII, the Korean War, and the early years of the Cold War. Users will find maps, meeting minutes, memoranda, correspondence, and more drawn from the UK National Archives, made searchable through the platform’s sophisticated keyword and proximity finder tools. 

Shakespeare’s Globe to Globe Festival on Drama Online. Bloomsbury

A must for scholars, performers, and students of Shakespeare and intercultural theater, this rich database features filmed performances of Shakespeare’s plays, presented by companies from all over the world in their native languages and with their own sensibilities and styles. Performances are given in languages that range from Castilian Spanish to Shona, Yoruba, and Turkish. English subtitles and transcripts make for a frictionless viewing experience.

State Papers Online Colonial: Asia (Far East, Hong Kong, and Wei-Hai-Wei), Part I. Gale

The first installment in Gale’s four-part State Papers Online Colonial: Asia database draws upon recently digitized documents from the UK’s National Archives to provide insight into the early history of the East India Company, subsequent colonial governance, and the decolonization in the Far East, Hong Kong, and Wei-Hai-Wei. Materials include original correspondence, calendars, monographs, maps, photographs, and more, all of which are superbly imaged, allowing for rich study and exploration.
