First Workshop on Formal Languages and Neural Networks (FLaNN)

FLaNN (Formal Languages And Neural Networks) is an online discussion group that emerged in 2021 to explore connections between formal language theory, neural networks, and natural language processing. During its four years of existence, FLaNN has hosted weekly virtual seminars and conversations and has blossomed into an active and lively community.

Our goal in this workshop is to bring the FLaNN community together in person and to expand participation by welcoming researchers who are not yet involved. The workshop will foster discussion of ongoing and prospective research on FLaNN-related themes and provide tutorial background for interested graduate students and established researchers. It will also explore possible applications of FLaNN-related work to automated reasoning and other applied areas.

This workshop will have three main components. First, established researchers will present current work in a way that is interesting and accessible to scholars within and outside the FLaNN community. Second, workshop attendees will give poster presentations (graduate students are particularly encouraged to submit!). Finally, throughout the workshop there will be structured opportunities for discussion to stimulate new research directions and collaborations.

Call for Submissions

We invite the submission of abstracts for posters that discuss the formal expressivity, computational properties, and learning behavior of neural network models, including large language models (LLMs). Abstracts can describe original or already-published work. The workshop will foster interdisciplinary conversations about the connections between formal language theory and neural networks.

Topics covered by the workshop (along with representative articles) include, but are not limited to, the following:

The formal connection in submitted work need not concern formal languages; it could instead come from other domains such as logic or complexity theory. Likewise, the neural networks involved could be LLMs or any other neural network architecture. However, work that focuses solely on neural networks without a connection to formal computational properties is unlikely to be accepted, and the same holds for work that discusses formal languages without a connection to neural networks.

How to submit

Submissions should be made through OpenReview at the following link: https://openreview.net/group?id=FLaNN/2026/Workshop#tab-recent-activity. If you do not already have an OpenReview account, we recommend creating one by January 29, 2026, so that there is enough time for your account to be approved before the submission deadline.

Submission information

Submissions will be non-archival. You may submit work that is unpublished, work that has already been published elsewhere, or work that has been released on preprint servers such as arXiv. Work in progress is allowed, though it must be far enough along that attendees of the workshop would benefit from hearing about it.

Submissions will take the form of abstracts, and accepted abstracts will be presented at the workshop as posters. Submissions must meet the following criteria:

  1. The text of the abstract must fit on one page.
  2. You may use no more than one page for visual elements (figures and/or tables).
  3. You may use unlimited space for references.
  4. You should submit the abstract as a PDF.

A LaTeX template for the abstract submission is available at https://www.overleaf.com/latex/templates/flann-workshop-abstract-template/tjbqmgtvyxpr.
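If it is helpful while you wait for template access, the sketch below is a minimal stand-in document that follows the criteria above: one page of abstract text, an optional page of figures and/or tables, and unrestricted space for references. The document class, packages, and margins shown here are assumptions on our part, not part of the official template; the Overleaf template linked above is authoritative for formatting.

% Minimal stand-in skeleton (not the official FLaNN template).
% The class, packages, and margins below are assumptions; replace
% this file with the Overleaf template once you have access.
\documentclass[11pt]{article}
\usepackage[margin=1in]{geometry} % assumed margins
\usepackage{graphicx}             % for the optional figure/table page
\usepackage{natbib}               % references may use unlimited space

\title{Your Abstract Title}
\author{Author One \and Author Two}
\date{}

\begin{document}
\maketitle

% Page 1: the text of the abstract (must fit on one page).
Your abstract text goes here.

\newpage
% Page 2 (optional): at most one page of figures and/or tables.
\begin{figure}[h]
  \centering
  % \includegraphics[width=0.7\linewidth]{figure.pdf}
  \caption{An illustrative figure.}
\end{figure}

% References: unlimited space.
\bibliographystyle{plainnat}
% \bibliography{references}

\end{document}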

Important dates

Organizing committee

Contact information

For any questions not answered above, please email flann@cs.yale.edu.