Two talks at FITC Web Unleashed 2023 (thanks for the ticket Rangle!) really lit up my brain. The conference opened with Wes Bos’ “AI and Coding: Hype or Reality?” and on day 3, Jen Looper’s “The Ethics Of Generative AI: A Humanist’s Guide” helped me view all the hype and reality from a new perspective.
The Talks
Wes Bos gave us a whirlwind tour of all the ways he uses LLMs: to write code and tests, create dummy data, convert data between formats, and make his podcast Syntax.fm searchable and accessible to more people. His excitement was contagious as he demoed LLM-driven autocompletions, code generation, audio transcriptions, and language embeddings, while reminding us that this stuff doesn’t always work in the real world.
Jen Looper’s talk invited us to be more skeptical of AI and to think about what it can do for us (or to us) as humans, not just as software developers. Carbon emissions and the mental health toll on the people who filter data to keep it safe for training LLMs were topics I hadn’t given much thought to before, but what stood out to me was the education perspective. LLMs have already changed how students get their work done in school. What will those students be like when they graduate, join the workforce, and become my teammates? And what will our work look like by then?
How are different groups adopting LLMs?
I have been slow to adopt LLM-based tools in my day-to-day work. I can ask ChatGPT to write functions or tests that use common formulas (e.g. find the distance between two GPS coordinates) or run conversions on plain text (e.g. turn this CSV into JSON). However, when I have to do something more complex, I turn back to my usual ways of working: I go to Google and look for articles, tutorials, videos, and Stack Overflow answers about whatever I am curious about. When I want a consistent voice on a larger topic, I go to the library and get a book. I check the dates books and blog posts were written, separate marketing from useful content, and compare different sources of information to find the one that fits my context.
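To give a concrete sense of that first kind of task, here is a sketch of the sort of self-contained helper I mean: a haversine distance function between two GPS coordinates, written in TypeScript. The function name and the example coordinates are just illustrative, not anything I actually shipped.

```ts
// The kind of small, well-known helper an LLM handles well: the haversine
// formula for the great-circle distance between two GPS coordinates, in km.
function distanceKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const R = 6371; // mean Earth radius in km
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Example: Toronto to Vancouver is roughly 3,360 km.
console.log(distanceKm(43.65, -79.38, 49.28, -123.12));
```

It is the kind of function where the formula is well documented, the inputs and outputs are obvious, and a quick spot check tells me whether the output is sane, which is exactly why I am comfortable letting an LLM write it.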
Meanwhile, students in K-12 are turning in assignments generated entirely by prompting LLMs, and citing ChatGPT as a primary source of information. School boards and educators are playing catch-up, trying out different policies and enforcement strategies. Should they ban AI completely? Can they create a new curriculum that pairs LLM usage with critical thinking skills? On the other hand, Duolingo and Khan Academy have both launched LLM-based features that work like tutoring or other 1-on-1 teaching approaches. As tools that use LLMs to help students explore certain topics become more common, enthusiastic learners will have easier access to help than ever before.
What do these differences mean for the future?
I think our morals about using these tools will be influenced by students as well. There was an uproar in the dev community, and a class action lawsuit, when we learned GitHub Copilot had been using our git repos as training data without our explicit permission. Today, visual artists are trying to stop their work from being used to train generative AI models, and to recover damages for the work already used. As a person who has created text, audio, video, and software content, I can relate to that uncomfortable feeling of seeing one’s work commercialized without permission, in ways the creator never imagined. It makes me a bit uncomfortable to use LLMs for profit without knowing whose effort I’m building on top of, and how they feel about it. However, this attitude could fade away. What if this backlash is just part of the transition to AI? Will a student who heavily relies on LLMs care if their LLM-generated homework becomes part of future LLMs’ training data? I suspect they won’t be bothered as much.
At some point, our generational differences in how we think about and use LLMs will collide in the workplace, and we’ll have to figure out the right compromises to be able to work together. I’m not sure what software development will look like by then. We already have prototypes of software that creates UIs and generates entire codebases, but I expect people will still have to come together to solve new problems and share knowledge. I think knowing whether a given task is a good fit for an LLM will be a critical skill, along with being able to refine its input and output. Creativity - being able to create something an LLM cannot even hallucinate - may become even more valuable.
How am I going to prepare?
In the short term, I should get more familiar with LLMs. I could turn to them for help more often and talk about the output with teammates. As more models become available, I may discover that some line up better with my own morals: lower mental health impacts, smaller environmental footprints, or training on truly open data sets. When I talk to students, I can keep encouraging them to think for themselves and to use LLMs for inspiration or coaching rather than having them do the work. Picking the right tool for the job is important, and sometimes that tool will be an LLM.