Info

This course examines the use of natural language processing as a set of methods for exploring and reasoning about text as data, focusing especially on the applied side of NLP — using existing NLP methods and libraries in Python in new and creative ways (rather than exploring the core algorithms underlying them; see Info 159/259 for that).

Students will apply and extend existing libraries (including scikit-learn, PyTorch, Gensim, spaCy, and Hugging Face) to textual problems. Topics include text-driven forecasting and prediction (using text for problems involving classification or regression); exploratory data analysis; experimental design; the representation of text, including features derived from linguistic structure (such as named entities, syntax, and coreference) and features derived from low-dimensional representations of words, sentences, and documents; exploring textual similarity; information extraction (extracting relations between entities mentioned in text); and the underlying structure and affordances of large language models.
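
To give a flavor of this kind of applied work, here is a minimal sketch of the first topic above (text-driven classification) using scikit-learn. The 20 Newsgroups corpus, the category choices, and the parameter settings are illustrative stand-ins only, not materials from the course:

    # Bag-of-words text classification with logistic regression in scikit-learn.
    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # Any labeled text collection would work; 20 Newsgroups is a convenient stand-in.
    categories = ["rec.autos", "sci.space"]
    train = fetch_20newsgroups(subset="train", categories=categories)
    test = fetch_20newsgroups(subset="test", categories=categories)

    # Turn each document into a sparse vector of word counts.
    vectorizer = CountVectorizer(max_features=10000)
    X_train = vectorizer.fit_transform(train.data)
    X_test = vectorizer.transform(test.data)

    # Fit a classifier on the training documents and evaluate on held-out data.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, train.target)
    print("accuracy: %.3f" % accuracy_score(test.target, clf.predict(X_test)))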

This is an applied course; each class period will be divided between a short lecture and in-class lab work using Jupyter notebooks (roughly 50% each). Students will be programming extensively during class, and will work in groups with other students and the instructors. Students must prepare for each class and submit preparatory materials before class. Attendance is required.

This course is targeted to graduate students across a range of disciplines (including information, English, sociology, public policy, journalism, computer science, law, etc.) who are interested in text as data and can program in Python but may not have formal technical backgrounds.

Office hours

David Bamman:
  • Wednesday, 10am-noon (314 South Hall)

Kent Chang:
  • TBD

Prerequisites

Graduate student status; proficiency in Python programming (able to write programs of at least 200 lines of code), equivalent to INFO 206A/B.

Syllabus

(Subject to change.)

Week 1
  • 8/23: Introduction [slides]. Readings: Nguyen et al. 2020. Optional: Ziems et al. 2023.
Week 2
  • 8/28: Words [slides]. Readings: NLTK 3; Potts. Optional: Manshel 2020; Fischer-Baum et al. 2020.
  • 8/30: Finding distinctive terms [slides]. Readings: Kilgarriff 2001 (up to p. 248); Monroe et al. 2009 (up to 3.3). Optional: Jurafsky et al. 2014; Mosteller and Wallace 1964.
Week 3
  • 9/4: Holiday (Labor Day).
  • 9/6: Lexical semantics/word embeddings 1 [slides]. Readings: SLP3 ch. 6; Gensim word2vec tutorial. Optional: Shechtman 2021; Soni et al. 2021.
Week 4
  • 9/11: Bias in word embeddings [slides]. Readings: An et al. 2018. Optional: Kozlowski et al. 2019.
  • 9/13: EDA: Topic models [slides]. Readings: Blei 2012. Optional: Klein 2020; Antoniak et al. 2019; Demszky et al. 2019; Grimmer 2010.
Week 5
  • 9/18: Annotating data [slides]. Readings: Krippendorff 2018, "Reliability" (bCourses). Optional: Vidgen et al. 2021; Voigt et al. 2017.
  • 9/20: Text classification: logistic regression [slides]. Readings: NLTK 6; Scikit-learn tutorial. Optional: Zhang et al. 2018; Broadwell et al. 2017.
Week 6
  • 9/25: Hypothesis testing 1 [slides]. Readings: NLTK 6; Dror et al. 2018; Hovy and Spruit 2016. Optional: Field et al. 2021; Blodgett et al. 2020; Denny et al. 2018.
  • 9/27: Hypothesis testing 2: bootstrap; permutation tests [slides]. Readings: A Gentle Introduction to the Bootstrap Method. Optional: Antoniak and Mimno 2017.
Week 7
  • 10/2: Language models: basics, evaluation, sampling [slides]. Readings: SLP3 ch. 3. Optional: Danescu-Niculescu-Mizil et al. 2013.
  • 10/4: Transformers 1 [slides]. Readings: SLP3 ch. 10. Optional: Gururangan et al. 2022; Chang et al. 2023.
Week 8
  • 10/9: Transformers 2 [slides]. Readings: SLP3 ch. 11.
  • 10/11: Using contextual embeddings [slides]. Readings: Smith 2020; Devlin et al. 2018. Optional: Lucy et al. 2021.
Week 9
  • 10/16: Large language models 1 [slides]. Readings: Prompt Engineering Guide. Optional: Liu et al. 2021.
  • 10/18: Large language models 2 [slides]. Readings: Prompt Engineering Guide. Optional: Jurgens et al. 2023.
Week 10
  • 10/23: EDA: Text clustering [slides]. Readings: Blog post; Scikit-learn clustering. Optional: Nelson 2020; Wilkens 2016; Viswanathan et al. 2023.
  • 10/25: WordNet [slides]. Readings: SLP3 ch. 23; NLTK 2. Optional: Tenen 2018.
Week 11
  • 10/30: POS tagging [slides]. Readings: SLP3 ch. 8; Parrish blog post. Optional: Gimpel et al. 2011.
  • 11/2: Multiword expressions [slides]. Readings: Manning & Schütze (1999); Sag et al. 2001. Optional: Handler et al. 2016; Lau et al. 2013.
Week 12
  • 11/6: Named entity recognition [slides]. Readings: SLP3 ch. 8.3. Optional: Erlin et al. 2021; Evans and Wilkens 2018.
  • 11/8: Dependency parsing [slides]. Readings: SLP3 ch. 18. Optional: Reeve 2017; Underwood et al. 2018.
Week 13
  • 11/13: Coreference resolution [slides]. Readings: spaCy neural coref. Optional: Sims and Bamman 2020.
  • 11/15: Information extraction [slides]. Readings: SLP3 ch. 21. Optional: Keith et al. 2017.
Week 14
  • 11/20: Sequence alignment [slides]. Readings: Needleman–Wunsch; Smith–Waterman. Optional: So et al. 2019; Wilkerson et al. 2014.
  • 11/22: Holiday (Thanksgiving).
Week 15
  • 11/27: Final project poster session.
  • 11/29: Peer review.

Grading

10% Participation
40% Homeworks
50% Project:
     5% Proposal/literature review
    15% Midterm report
    25% Final report
     5% Presentation

We will typically have a short homework due before each class (so no late homeworks will be accepted); each homework will be graded as {check+, check, check-, 0}. We will drop your three lowest-scoring homeworks when calculating your final grade.

Project

Info 256 will culminate in a semester-long project (in teams of one to three students) that applies natural language processing in support of an empirical research question. The project comprises four components:

• Project proposal and literature review. Students will propose the research question to be examined, motivate its rationale as an interesting question worth asking, and assess its potential to contribute new knowledge by situating it within related literature in the scientific community. (2 pages; 5 sources)
• Midterm report. By the middle of the course, students should present initial experimental results and establish a validation strategy to be performed at the end of experimentation. (4 pages; 10 sources)
• Final report. The final report will include a complete description of the work undertaken for the project, including data collection, development of methods, experimental details (complete enough for replication), comparison with past work, and a thorough analysis. Projects will be evaluated according to standards for conference publication, including clarity, originality, soundness, substance, evaluation, meaningful comparison, and impact (of ideas, software, and/or datasets). (6 pages, not including references)
• Presentation. At the end of the semester, teams will present their work to the class and the broader Berkeley community in an in-class poster session on 11/27.

All reports should use the ACL 2023 style files on Overleaf.

Policies

Academic Integrity

All students will follow the UC Berkeley code of conduct. While the group project is a collaborative effort, all homeworks must be completed independently. All writing must be your own; if you mention the work of others, you must clearly cite the appropriate source. (For additional information on plagiarism, see here.) This holds for source code as well: if you use others' code (e.g., from StackOverflow), you must cite its source. All homeworks and project deliverables are due at the date and time of the deadline. We have a zero-tolerance policy for cheating and plagiarism; violations will be referred to the Center for Student Conduct and will likely result in failing the class.

Students with Disabilities

Our goal is to make the class a learning environment that is accessible to all students. If you need disability-related accommodations and have a Letter of Accommodation from the DSP, have emergency medical information you wish to share with me, or need special arrangements in case the building must be evacuated, please inform me immediately. I'm happy to discuss privately after class or in my office.