First Workshop on Formal Languages and Neural Networks (FLaNN)
FLaNN (Formal Languages And Neural Networks) is an online discussion group that emerged in 2021 to explore connections between formal language theory, neural networks, and natural language processing. Over its four years of existence, FLaNN has hosted weekly virtual seminars and conversations, and it has blossomed into an active and lively community.
Our goal in this workshop is to bring the FLaNN community together in the real world and to expand participation by welcoming researchers who are not yet involved. The workshop will foster discussion of ongoing and prospective research on FLaNN-related themes and provide tutorial background for interested graduate students and established researchers. It will also explore possible applications of FLaNN-related work to automated reasoning and other applied areas.
This workshop will have three main components. First, established researchers will give talks on current work in a way that is interesting and accessible to scholars within and outside of the FLaNN community. Second, there will be poster presentations by workshop attendees (with graduate students being particularly encouraged to submit!). Finally, throughout the workshop, there will be structured opportunities for discussion in order to stimulate new research directions and collaborations.
Call for Submissions
We invite the submission of abstracts for posters that discuss the formal expressivity, computational properties, and learning behavior of neural network models, including large language models (LLMs). Abstracts can describe original or already-published work. The workshop will foster interdisciplinary conversations about the connections between formal language theory and neural networks.
Topics covered by the workshop (along with representative articles) include, but are not limited to, the following:
- Bounds on the formal expressivity of specific neural architectures (Strobl et al., 2025, Sarrof et al., 2024)
- Formalisms and tools that facilitate useful abstractions for thinking about how neural architectures work (Weiss et al., 2021, Lindner et al., 2023)
- Interpretability research that focuses on understanding how formal algorithms or representations are realized inside neural networks (Kirov and Frank, 2012, Geiger et al., 2024)
- Autoformalization and its validation (Weng et al., 2025, Wu et al., 2022)
- Constructions that show how specific neural architectures can implement mechanisms that are of formal interest (Hewitt et al., 2020, Smolensky et al., 2024)
- Empirical evaluations of formal predictions about the expressivity of neural networks (Delétang et al., 2023, Yang et al., 2025)
- Neurosymbolic methods for automated reasoning (Nawaz et al., 2025, Colelough et al., 2024)
- Proposals for new neural architectures or training methods inspired by formal analyses of these systems (Dehghani et al., 2019, Merrill et al., 2024, Butoi et al., 2025)
- Analyses of the learning dynamics of neural network models (Murty et al., 2023, Wang et al., 2025)
The formal connection in submitted work does not have to concern formal languages; it could lie in other domains such as logic or complexity theory. The neural networks involved could be LLMs or any other kind of neural network. However, work that focuses solely on neural networks without a connection to formal computational properties is unlikely to be accepted; likewise, work that discusses formal languages without any connection to neural networks is also unlikely to be accepted.
How to submit
Submissions should be made through OpenReview at the following link: https://openreview.net/group?id=FLaNN/2026/Workshop#tab-recent-activity. If you do not already have an OpenReview account, it is recommended that you create one by January 29, 2026, to ensure that there is enough time for your account to be approved before the submission deadline.
Submission information
Submissions will be non-archival. You may submit work that is unpublished, work that has already been published elsewhere, or work that has been released on preprint servers such as arXiv. Work in progress is allowed, though it must be far enough along that attendees of the workshop would benefit from hearing about it.
Submissions will be made in the form of abstracts, and accepted abstracts will be presented at the workshop as posters. The submissions must meet the following criteria:
- The text of the abstract must fit on one page.
- You may use no more than one page for visual elements (figures and/or tables).
- You may use unlimited space for references.
- You should submit the abstract as a PDF.
A LaTeX template for the abstract submission is available at https://www.overleaf.com/latex/templates/flann-workshop-abstract-template/tjbqmgtvyxpr.
Important dates
- December 18, 2025: Call for posters released
- January 29, 2026: (Recommended) Create a profile on OpenReview by this date, if you do not already have one
- February 12, 2026: Abstract submissions due on OpenReview
- March 15, 2026: Notification of decisions on submitted abstracts
- May 11 to May 13, 2026: Workshop
Organizing committee
- Robert Frank
- Lena Strobl
- Dana Angluin
- Timos Antonopoulos
- Arman Cohan
- Tom McCoy
- Ruzica Piskac
- Andy Yang
Contact information
For any questions not answered above, please email flann@cs.yale.edu.