Meeting Agenda

Nebojša Ćirić edited this page Jun 25, 2024 · 15 revisions

2024-06-25

Covering the Apple contribution (from George's email). These are the main parts of the wrapper.

Here are previous presentations that involve this wrapper code.

If Kyle joins, we can discuss:

  • Dictionary & Rules & ML approach
  • Check if there's a way to attract NLP students to help scale

2024-06-11 (CANCELED - too many OOOs)

Same agenda as 2024-06-25 above; the items carried over after the cancellation.

2024-05-28

  • George sent an email about Apple inflection code open-sourcing
  • Further discussion about FST & ML (LSTMs)
  • Potential contributors from academia (no solid news here)

2024-05-14

  • Getting month data from Wikidata (thanks Denny)
  • Serbian rules PR to showcase more complex rules
  • Rule generation using examples
  • Multiple results from the API: some words can inflect in several ways depending on context (weighted FSTs can return all candidates with costs), but higher-level logic needs to decide which one to use
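The last point above can be sketched in plain Python: a weighted lookup that, like a weighted FST, returns several candidate forms with costs, plus a higher-level layer that picks one. The words, forms, and weights below are made-up illustrations, not the project's actual data.

```python
# Sketch: a weighted lookup that, like a weighted FST, can return several
# candidate inflected forms, each with a cost (lower = more likely).
# All entries here are hypothetical examples.
CANDIDATES = {
    ("окно", "locative"): [("окне", 0.2), ("окну", 1.5)],
}

def inflect(lemma, case):
    """Return all candidate forms sorted by weight (best first)."""
    forms = CANDIDATES.get((lemma, case), [])
    return sorted(forms, key=lambda fw: fw[1])

def pick(lemma, case, context_filter=None):
    """Higher-level logic: apply an optional context-based filter,
    otherwise fall back to the lowest-weight candidate (or the lemma
    itself when nothing is known)."""
    forms = inflect(lemma, case)
    if context_filter is not None:
        filtered = [f for f, _ in forms if context_filter(f)]
        if filtered:
            return filtered[0]
    return forms[0][0] if forms else lemma
```

In a real pipeline the `CANDIDATES` table would be replaced by a weighted FST composition (e.g. via Pynini), but the division of labor is the same: the transducer enumerates weighted candidates, and separate logic chooses among them.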

2024-04-30

  • Go over PRs
  • Some projects/questions:
    • Expand the lexicon (format: form1: attr1, attr2; form2: attr1, attr3; ...)
    • Investigate pulling Wikidata (script)?
    • Use FST model to work with dates in English (CLDR lexicon/dates)
    • Add a more complex example using Pynini (Serbian/Russian?)
    • An interesting quote from the FST book

"In our opinion, finite-state methods still play a central role in speech and language technologies and are not going away any time soon. At Google, the OpenFst and OpenGrm libraries remain absolutely essential for latency-sensitive applications like voice search, automated captions in YouTube, and the Google Assistant. Many Google engineers and linguists working on speech and language processing specialize in WFST algorithms or grammar development.

While we cannot speak to practices elsewhere in the tech industry, Pusateri et al. (2017) reports that Apple’s Siri assistant uses finite-state grammars—hybridized with a neural network for inverse normalization, i.e., to convert ASR transcripts to a human-readable form. The powerful Kaldi speech recognition toolkit—widely used by academic researchers—uses a WFST decoder, implemented with OpenFst.

Other technologies—including modern neural networks—have begun to encroach on the state of the art for speech technologies, and may ultimately render WFSTs obsolete, but such technologies still struggle to compete on latency, particularly for embedded platforms (e.g., mobile devices) lacking the specialized hardware needed to support large neural networks."
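The lexicon-expansion bullet above sketches a compact entry syntax ("form1: attr1, attr2; form2: attr1, attr3; ..."). A minimal parser for that notation might look like the following; the entry syntax and the sample Serbian forms are illustrative assumptions, not a settled project format.

```python
def parse_entry(entry: str) -> dict[str, set[str]]:
    """Parse a compact lexicon entry of the shape
    'form1: attr1, attr2; form2: attr1, attr3'
    into a mapping {form: {attributes}}."""
    lexicon = {}
    for part in entry.split(";"):
        part = part.strip()
        if not part:
            continue
        form, _, attrs = part.partition(":")
        lexicon[form.strip()] = {a.strip() for a in attrs.split(",") if a.strip()}
    return lexicon
```

For example, `parse_entry("kuća: nom, sg; kuće: gen, sg")` yields `{"kuća": {"nom", "sg"}, "kuće": {"gen", "sg"}}`, which is the shape a downstream FST or rule engine could consume.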

2024-04-16

  • Go over PRs
  • Go over next steps, e.g. how to do inflection.

2024-04-02

2024-03-19

  • Denny to present Wikidata
  • Review “Issues” and prioritize them

2024-03-07

  • Introduce members
  • Discuss operations, e.g. meeting cadence/duration
  • Discuss goals and non-goals
  • Go over issues
  • Discuss repository structure