The LIVE Primer

LIVE is a workshop for sharing work on live programming. The LIVE Primer is a collection of introductory resources for researchers in the LIVE community. We hope it will be especially useful to researchers new to these topics, including people coming from outside of academia.

Status: Under construction. Has some stuff in it, and a bunch of holes.

Please email us with questions / feedback / contributions. Thanks!

Live programming

Traditionally, programming is opaque. As a computer runs a program, it navigates through control structures, performs operations, and transforms data. Programmers need to arrange this behavior, but the behavior is typically invisible – no feedback is provided by the computer to the programmer to tell them what their program will do as they write it. Live programming means moving away from this status quo, creating programming tools which provide immediate feedback on the dynamic behavior of a program even while programming. The use of live in this context dates back decades, and systems embodying various levels of liveness date back as far as computing itself: see Sketchpad (Sutherland, 1964), Smalltalk, and VisiProg (Henderson & Weiser, 1985).

Live systems can take many different forms.

Starting in section 2, we will look at these approaches in more detail. For now, we’ll leave you with a few general preliminaries.

Terminological points

A few concepts come up in this primer that you may not be familiar with. Here are some quick & dirty explanations.

Static vs dynamic: The phrase dynamic behavior in the definition given earlier means the behavior of a running program. Some other kinds of feedback, like type checking and linting, don’t involve actually running a program – they just analyze its source code – so that kind of feedback is called static. Live programming tends to refer to systems that provide dynamic feedback, from running programs, though the boundaries of these categories are fuzzy. (Colloquially, an editor that gives you type-driven autocomplete suggestions certainly feels a bit live, doesn’t it?)

The boundaries of programming, and programs: It’s surprisingly hard to define what programming is, or what programs are. Without getting into all those weeds, we should clarify one part of our stance: Programs aren’t just textual code, and programming isn’t just editing textual code in a code editor! We agree with the view that programming is the human activity of describing a process run by some computer – that includes building flow-charts, acting out steps for the computer to copy, and maybe even designing a chart with Excel. In all these cases, the program is whatever the computer is saving and holding onto that can be run again in the future. This doesn’t need to look like traditional computer code at all.

Further reading

If you’re interested in liveness in general, we recommend:

Approach: Code

In today’s world, programming usually means editing textual code. Some approaches to live programming involve moving away from text (see following sections for that), but plenty of approaches keep text and add live feedback on top of it.

To take a broad perspective, we can follow a scheme from prior work and divide approaches into three categories (with granularity of feedback increasing as we go):

Liveness outside of code

The tree example from Inventing on Principle , an example of Liveness outside of code

Some systems provide quick feedback about the final output of a program, without revealing information on the program internals that led to that output. Examples include the live/hot reloading systems popular for application development (like Hot Reloading with React) and the split-screen editors popular for generative art (the P5.js web editor, the tree & video-game examples from Inventing on Principle shown above, and TouchDevelop). While the fast feedback these systems provide is certainly an improvement on slow compile-run loops (and often requires sophisticated technical work), this feedback is the most coarsely-grained liveness possible. The following categories of systems provide visibility not just into the output of programs, but into their internal operation.

Liveness between code cells

Cells running JavaScript code in Natto, an example of Liveness between code cells

Rather than provide only top-level output for a single-file program, other editors provide special interfaces where code is broken up into cells. These editors can then provide visibility into the values flowing between cells. The original instance of this pattern is the spreadsheet. More recently, computational notebooks like Mathematica, Jupyter, and Observable have extended this model to support more sophisticated computations. Clerk lets users store and edit notebooks as text files, but arguably belongs in this category, since feedback is available only for top-level definitions, not for the internals of code. Even further afield, Natto (shown above) reshapes the conventional notebook structure into nodes and wires on a two-dimensional canvas.
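To make the cell-based model concrete, here is a minimal sketch in plain JavaScript (all names invented for illustration, not taken from any of the systems above) of spreadsheet-style recomputation: each cell is a formula over other cells, and changing one cell immediately recomputes its dependents, which is what lets such an environment display a current value beside every cell.

```javascript
// Minimal sketch of spreadsheet-style liveness: cells are formulas over
// other cells; setting one cell recomputes everything downstream, so the
// environment can always show a fresh value next to every cell.
class Cells {
  constructor() { this.formulas = new Map(); this.values = new Map(); }
  define(name, deps, fn) {
    this.formulas.set(name, { deps, fn });
    this.recalc();
  }
  set(name, value) {
    this.formulas.set(name, { deps: [], fn: () => value });
    this.recalc();
  }
  recalc() {
    // Naive strategy: sweep repeatedly until values stabilize. Fine for
    // small acyclic cell graphs; real systems recompute in dependency order.
    for (let i = 0; i < this.formulas.size; i++) {
      for (const [name, { deps, fn }] of this.formulas) {
        const args = deps.map((d) => this.values.get(d));
        this.values.set(name, fn(...args));
      }
    }
  }
  get(name) { return this.values.get(name); }
}

const sheet = new Cells();
sheet.set("a", 2);
sheet.set("b", 3);
sheet.define("sum", ["a", "b"], (a, b) => a + b);
console.log(sheet.get("sum")); // 5
sheet.set("a", 10);            // editing a cell updates its dependents
console.log(sheet.get("sum")); // 13
```

The design choice worth noticing is that the user never asks for recomputation: every edit triggers it, so the display is always current.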

Liveness within code

An interface from Learnable Programming that shows detailed line-by-line feedback on code, an example of Liveness within code

The finest-grain feedback comes from augmenting textual-code editors with in-context displays showing run-time behavior. Rauch et al.’s Babylonian-style Programming surveys the state of the art in this category as of 2012. They evaluate eight existing editors.

Since 2012, Apple has introduced Xcode Playgrounds, and a line of work has developed based on Projection Boxes (Lerner, 2020).
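As a rough illustration of the kind of instrumentation behind in-context displays (a hypothetical sketch, not how any of the systems above actually work), an editor can wrap expressions in probes that record run-time values, then render the recorded trace inline next to the lines that produced each value:

```javascript
// Sketch of liveness within code: a probe() helper records intermediate
// values tagged by label; an editor could render the trace entries inline
// beside the expressions that produced them.
const trace = [];
function probe(label, value) {
  trace.push({ label, value });
  return value; // pass-through, so a probe can wrap any expression
}

function normalize(xs) {
  const max = probe("max", Math.max(...xs));
  return probe("result", xs.map((x) => x / max));
}

normalize([2, 4, 8]);
console.log(trace);
// [{ label: "max", value: 8 }, { label: "result", value: [0.25, 0.5, 1] }]
```

In a real system the probes would be inserted automatically by the editor rather than written by hand, so the feedback costs the programmer nothing.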

Related work from LIVE:

Approach: Visual programming

Visual programming is programming in a medium other than text. Turns out, most things in the world aren’t text, so this is a pretty broad space!

Is visual programming live programming? Not automatically, no! Live programming is about programming with live feedback – programming alongside a dynamic system, not just a static representation. A program represented as a diagram can be just as static as a program represented as text. Just because you’re dragging around nodes or snapping blocks together does not mean that your program is any more live than a textual program. (For more on a related argument, see how Horowitz & Heer distinguish liveness from richness in Live, Rich, and Composable.)

But visual programming can offer openings for liveness, and work harmoniously with it. By moving away from text editors to novel graphical environments, visual programming systems open up space for fine-grained visual feedback to be threaded into an interface. (See, for instance, node viewers in TouchDesigner, or inline displays of live values in Lamdu expressions.) For this reason, visual programming systems are a popular approach for live-programming systems.

A few forms of visual programming come up all the time: nodes and wires, flowcharts, blocks, and text-like structured editors. These are the classics.

Classic: Nodes and wires

Boxes wired together, with wires typically representing the flow of data. This form factor has found a remarkable amount of use in practical contexts like multimedia production (Max, Pure Data, TouchDesigner), game programming (Unreal Engine Blueprints), CAD (Grasshopper), and automation (Yahoo Pipes). It’s also a common format for research projects. But watch out: a free-form canvas where all connections are visible can become unmanageable spaghetti.
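A bare-bones sketch of the underlying model (hypothetical names, not drawn from any particular system): nodes hold operations, wires name the upstream nodes whose outputs feed each input, and evaluation pulls values through the graph. A live editor could display each node’s current output directly on the node.

```javascript
// Minimal nodes-and-wires dataflow graph: each node has an operation and
// input wires pointing at upstream nodes; evaluation pulls values through
// the wires, memoizing so shared upstream nodes run once.
const graph = {
  nodes: {
    freq:  { op: () => 440 },
    ratio: { op: () => 1.5 },
    fifth: { op: (a, b) => a * b, inputs: ["freq", "ratio"] },
  },
};

function evalNode(graph, name, cache = new Map()) {
  if (cache.has(name)) return cache.get(name);
  const node = graph.nodes[name];
  const args = (node.inputs ?? []).map((i) => evalNode(graph, i, cache));
  const value = node.op(...args);
  cache.set(name, value);
  return value;
}

console.log(evalNode(graph, "fifth")); // 660
```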

Related work from LIVE:

Classic: Flowcharts

If your wires represent control flow in an imperative program rather than functional data flow, you are working with a good old-fashioned flowchart. Although these date back to the ’40s, they have largely fallen out of use. (Why?)

Related work from LIVE:

Classic: Blocks

A more structured format than free-floating boxes is blocks that snap together, made famous by Scratch. These have mostly found a niche in educational contexts. (Why?)

Classic: Text-like structured editors

Some editors exist in a murky space on the spectrum between visual and text, varying in how much they emulate traditional textual code and traditional keyboard interactions. Examples here include Fructure (shown above), Lamdu, and Hazel. Note how similar these are to the block-based editors mentioned above! They’re both structure editors for nested structures. The main difference is perhaps that block-based editors prioritize manipulation by mouse while these text-like editors prioritize keyboard use.

Critiquing the classics

The classic visual programming approaches described above have met with a great deal of criticism. For instance, Fred Brooks spends a section of his famous No Silver Bullet essay talking smack about them:

A favorite subject for Ph.D. dissertations in software engineering is graphical, or visual, programming, the application of computer graphics to software design… Nothing even convincing, much less exciting, has yet emerged from such efforts. I am persuaded that nothing will.

His arguments, in summary: 1. Flowcharts are bad. 2. Screens are too small. 3. Software itself is invisible and unvisualizable. Read the essay for the details, and decide for yourself. (Incidentally, point 2 is elaborated by the so-called Deutsch limit: The problem with visual programming is that you can’t have more than 50 visual primitives on the screen at the same time.)

We can critique these classics from another angle. As we mentioned at the top of this section, visual programming is not inherently live. In their barest, purest versions, these classic forms of visual programming really just re-format static conventional code into static visual structures. The spirit of liveness is that we should see what our programs are doing, but by itself, classic visual programming just shows the programs, not the doing. Bret Victor expresses this well:

I don’t fault Fred Brooks for this view – the visual programming that he’s thinking of indeed has little to offer. But that’s because it visualizes the wrong thing.

Traditional visual environments visualize the code. They visualize static structure. But that’s not what we need to understand. We need to understand what the code is doing.

Visualize data, not code. Dynamic behavior, not static structure.

Overlapping this, others argue that visual languages don’t really let us think visually, because each problem domain brings its own set of visuals we’d want to use: “[Visual languages] offer only a fixed set of constructs, though visual ones – meaning a visual language fails to address the problem-specific nature of geometric thought.”

If we buy these critiques (you don’t have to buy them!), we’re left with a few options. One is to abandon visual programming. Another is to augment it with liveness. (As we mentioned earlier, there are reasons to think visual programming may harmonize well with live displays of data!) The last option is to move to a less-classic form of visual programming, where data is foregrounded and you program by interacting directly with the data. This qualifies as programming by demonstration, which we discuss in a later section.

More galleries

See Ivan Reese’s Visual Programming Codex for a vast array of implementations, links to further galleries, and more.

For a fascinating look at the state of the art mid-’90s (?), you can check out the comp.lang.visual FAQ.

Approach: Programming by Demonstration / Example

Traditionally, programmers build programs by specifying processes symbolically – given x, perform operations f and g on it. But in a live-programming system, the abstract x might take on a specific concrete value, and we might be able to use this to build a program in a less abstract way.

For instance, suppose at some point in our program we have the text George Clinton, and we want to extract the initial letters. Perhaps we could tell the programming system “In this case, I want the output GC”, and the system could infer a program from that. This would be programming by example (PbE). Or perhaps we could use an interface to select the characters G and C, and the system could infer a program from our actions. This would be programming by demonstration (PbD).

Here, we’re adopting a definitional distinction between PbE and PbD: programming by example asks the computer to infer general intent from nothing more than one or more input-output pairs, while programming by demonstration lets the computer watch the user demonstrate the construction of the output from the input, which might provide more of a hint as to how to do it in general. We’re not the only ones to use this distinction, but some other folks use PbE and PbD interchangeably, and in any case the line between them can be fuzzy.
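The George Clinton example can be sketched as a toy PbE synthesizer: enumerate a small, hypothetical DSL of string transformations and keep the ones consistent with the input-output pair. (Real synthesizers, such as the FlashFill line of work, search vastly richer program spaces with clever pruning; this sketch only checks four fixed candidates.)

```javascript
// Toy programming-by-example: search a tiny DSL of string transformations
// for programs consistent with a single input/output example. All names
// here are invented for illustration.
const dsl = {
  identity:  (s) => s,
  upper:     (s) => s.toUpperCase(),
  firstWord: (s) => s.split(" ")[0],
  initials:  (s) => s.split(" ").map((w) => w[0]).join(""),
};

function synthesize(input, output) {
  // Keep every DSL program whose behavior matches the example.
  return Object.keys(dsl).filter((name) => dsl[name](input) === output);
}

console.log(synthesize("George Clinton", "GC")); // ["initials"]
```

Note that a single example can be ambiguous: with the input "Madonna" and output "M", both firstWord-then-truncate-style programs and initials would match, which is exactly the intent-inference problem discussed below.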

The core challenge of PbE/PbD systems is inferring generalizable intent from specific examples or actions. As articulated in the canonical volume Watch What I Do :

The main challenge confronting Programming by Demonstration is how to infer the user’s intent. In order to convert a recorded action into a program to perform that action, the system needs to determine the user’s intent in performing the action. When the program is executed in the future, the context will be somewhat different, and it will be necessary to perform the action that is the equivalent of the recorded action in this new context.

Inference: How magic is it?

One of the most important ways PbD systems vary is how much magic they use when turning a user’s actions into a generalized program.

Conventional interfaces are designed to let the user perform concrete, immediate actions. A user might copy-paste text from one place to another in a text editor, or drag to resize a shape in a vector-graphics editor. In neither case does the user explicitly or unambiguously express a deeper, generalizable intent through these actions – say, the intent to have a reference to a heading bear the heading’s title, or the intent to have a certain shape always be contained within a larger shape. For a PbD system to work from demonstrations made with a conventional interface, it needs to somehow infer deeper intents that aren’t explicitly expressed.

An example of a system that does this is Eager. Eager watches the user perform actions on their computer. When it notices a repetitive series of actions it thinks it can automate, like copying subject lines one-by-one out of e-mails into a separate note, it appears and offers to continue the pattern for the user.

diagram of Eager with screenshots
Eager

Inferring patterns and intents from informally-provided data starts to sound a lot like artificial intelligence. In keeping with techniques of its time (1991), Eager relies on simple heuristics and symbolic pattern-matching to extrapolate the user’s actions. You can imagine more recent projects along these lines might use more sophisticated machine-learning and AI techniques.
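A heuristic in Eager’s spirit (a loose sketch, not Eager’s actual algorithm) might scan the action log for a repeated tail pattern and offer the pattern as the predicted continuation:

```javascript
// Heuristic sketch of Eager-style repetition detection: if the tail of the
// action log consists of two consecutive occurrences of the same k-action
// pattern, offer that pattern as the predicted next k actions.
function detectRepetition(log, maxPeriod = 5) {
  for (let k = 1; k <= maxPeriod && 2 * k <= log.length; k++) {
    const tail = log.slice(-k);
    const prev = log.slice(-2 * k, -k);
    if (tail.every((a, i) => a === prev[i])) return tail;
  }
  return null; // no repetition detected; stay out of the user's way
}

const log = ["copySubject", "switchToNote", "paste", "nextEmail",
             "copySubject", "switchToNote", "paste", "nextEmail"];
console.log(detectRepetition(log));
// ["copySubject", "switchToNote", "paste", "nextEmail"]
```

The real Eager must do more than match literal repeats: the interesting cases are ones where the repeated actions differ in their data (email #1, email #2, …), so generalizing over those varying parameters is where the inference gets hard.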

Thinking of PbD as an artificial-intelligence application is a well-established perspective. Take, for instance, Tessa Lau’s paper Why Programming-by-Demonstration Systems Fail: Lessons Learned for Usable AI, which largely presupposes this point of view (“PBD is a natural match for artificial intelligence, particularly machine learning. By observing the actions taken by the user (training examples), the system can create a program (learned model) that is able to automate the same task in the future (predict future behavior).”). As we all know from our experiences talking with AI agents, machine-learning-driven systems can fail in frustrating and opaque ways. Lau’s paper takes three of her projects (SMARTedit, Sheepdog, and CoScripter) and extracts lessons from them which she thinks can ameliorate some of the weaknesses of the AI-driven approach. These lessons are all worth looking at. We highlight one in particular: encourage trust by presenting a model users can understand. A system that makes inferences should display the results of those inferences to the user, so they can find problems and fix them. For more on this point, we suggest Heer’s paper on Agency plus automation, which advocates for shared representations of possible actions, enabling computational reasoning about people’s tasks alongside interfaces with which people can review, select, revise, or dismiss algorithmic suggestions.

Further from the promise and peril of high-inference PbD lies an alternative approach. We’ve seen that working from demonstrations made on conventional interfaces requires AI magic. But perhaps we could instead design new interfaces which give the user palettes of interactions that let them express unambiguous intent from the get-go.

The earliest programming-by-demonstration system, Pygmalion, exemplifies this approach. In Pygmalion, the user acts out steps of a program by manipulating icons on the screen that represent values and operators. (Incidentally, Pygmalion coined the term icons!) The system keeps a record of these actions and allows them to be replayed on different inputs. For instance, a factorial function can be defined by acting out what it does on 5, following the program’s subsequent execution down to the base case, and then acting out how the base case should be handled.

screenshot of Pygmalion
Pygmalion

Pygmalion embraces the fundamental vision of programming by demonstration: that a user should be able to build a generalizable program by acting out what the program does on particular cases. But it does this without any kind of inference. The system does not need to infer the user’s intent, because the interactions it makes available to the user are designed to convey unambiguous intent from the start.
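One way to picture inference-free demonstration (a minimal sketch with invented operation names; unlike Pygmalion, it handles no conditionals or recursion): give the user a palette of unambiguous operations, apply each one to a concrete value so the result is immediately visible, record every action, and replay the recording on fresh inputs.

```javascript
// Sketch of demonstration as recording: the user acts on a concrete value
// through a fixed palette of operations; each action shows its result
// immediately, and the recorded sequence is the program.
function makeRecorder(ops, concreteInput) {
  const steps = [];
  let current = concreteInput; // the concrete value the user acts on
  const recorder = {};
  for (const name of Object.keys(ops)) {
    recorder[name] = (...args) => {
      steps.push({ name, args });
      current = ops[name](current, ...args); // immediate, visible feedback
      return current;
    };
  }
  recorder.replay = (input) =>
    steps.reduce((v, s) => ops[s.name](v, ...s.args), input);
  return recorder;
}

const rec = makeRecorder({ double: (x) => x * 2, add: (x, n) => x + n }, 5);
rec.double();                // the user sees 10 right away
rec.add(1);                  // the user sees 11
console.log(rec.replay(10)); // the recording generalizes: 21
```

Because each palette operation maps to exactly one program step, no intent inference is needed: the demonstration is the program.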

The next few sections will explore some aspects of designing such “unambiguous intent” PbD systems.

Program representation & control flow

Although Pygmalion works by building a record of operations that define a procedure, it doesn’t actually show this record to the user. This seems like an important missing piece! An explicit representation of a program is the foundation for a user being able to tell whether the program does what they want it to do, and for making later edits to correct mistakes or add new functionality.

This representation doesn’t need to be traditional code. A recent project, Subsequently , lets a user manipulate data structures in a Pygmalion-like fashion, but each action creates a new panel to represent it. The actions in a program end up spread out into a flowchart that can be read like a comic book. Control flow, like conditional branches and loops, can also be edited using this flowchart. This aspect of Subsequently moves away from PbD per se. Drawing arrows between flowchart cells is a much more conventional, symbolic way to edit a program. But perhaps this hybrid structure takes good advantage of the strengths of these two styles of programming. PbD systems tend to struggle with control flow – it is difficult for a user to demonstrate a conditional or a loop. (A speculative argument why: Control flow exists on a meta level. Conditionals & loops must refer to actions that they control, so these actions must be reified into symbols and manipulated as such.)

screenshot from Subsequently presentation
Subsequently

This “PbD actions on data + symbolic control flow” recipe is also followed by the Drawing Dynamic Visualizations demo. The main canvas in DDV operates much like Pygmalion – the user applies successive actions in the same space. But the actions they perform leave a trace in a log of steps on the left. Structures like loops can then be added to this outline by manipulating it symbolically. While Subsequently’s model for control flow is goto-like flowchart loops, DDV uses structured-programming constructs like for-each loops, which is why a nested outline makes sense for DDV. (We encourage readers interested in Drawing Dynamic Visualizations to read Victor’s addendum to the project, which includes notes on how he ensured users could precisely and unambiguously express intent.)

screenshot from Drawing Dynamic Visualizations
Drawing Dynamic Visualizations

Droste’s Lair begins by following Subsequently’s model – flowcharts constructed with direct manipulations on comic-book panels – but uses recursive procedure-calling for control flow instead of flowchart loops.

screenshot from Droste’s Lair
Droste’s Lair

Disambiguating intent

Once higher-level concerns like control flow are moved to the world of symbolic manipulation, the world of demonstration comprises smaller operations. But even here, there are challenges in disambiguation.

Programming by example

Bidirectional programming

Challenges

Some general TODOs:

Media

Work from previous LIVEs:

Applications

TODO

Related concepts

We’ll close with an assortment of concepts that are helpful to know for working on and talking about live-programming systems.

End-user programming

End-user programming is programming done to accomplish an end in a person’s own work or life. For example:

That’s a super broad range of contexts. Despite the vast range of situations where end users want to program, most programming systems are still oriented around professional software engineering – people building software products to be used by different people far away.

Live programming and end-user programming are separate concepts – live programming systems can target professional programming, and end-user programmers can use non-live programming systems. But there’s a long history of live programming systems built for end-user programmers, with spreadsheets as the classic example. Some reasons we speculate live programming may be especially well-suited to end-user programming:

  1. End-user programmers may be less practiced than professionals at “playing computer in their head”, and may benefit more from the concrete feedback live programming can provide.
  2. End-user programmers may have less time and ego invested in the traditional software engineering stack: text editors, Git, etc. While live-programming tools need to overcome significant inertia to attract professionals, the choice between a traditional programming tool and a novel one may be a toss-up to an end-user programmer.
  3. Professional programmers generally write code that will run many times on many unknown inputs. To use live-programming techniques, they need to select example inputs to produce example dynamic behavior. They may reasonably be worried that their program may not cover other inputs correctly. End-user programmers are sometimes in this situation, but often they write code that will only run once, while they are watching. For instance, picture a scientist analyzing a data set in a notebook. If the live results they see look good, that may be all they need. Similarly, a live-coder making music usually doesn’t worry about how their program would respond to different inputs.

Further reading

Cognitive Dimensions of Notation

TODO

The gulfs of execution and evaluation

A fundamental concept from the field of human-computer interaction (HCI). Read about them in Norman’s Cognitive Engineering, in Hutchins, Hollan, and Norman’s Direct Manipulation Interfaces, or in this primer once we get around to it.

(Figure 3.1 from Norman’s Cognitive Engineering)
(Figure 3.2 from Norman’s Cognitive Engineering)

Getting hold of references

It can be hard to access academic work in a financially sustainable way without a university affiliation. Searching for a paper title will often get you to a free PDF. When that fails, other tools may succeed.

References

  1. Andersen, L., Ballantyne, M., & Felleisen, M. (2020). Adding interactive visual syntax to textual code. Proceedings of the ACM on Programming Languages, 4(OOPSLA), 1–28. https://doi.org/10.1145/3428290
  2. Burckhardt, S., Fahndrich, M., de Halleux, P., McDirmid, S., Moskal, M., Tillmann, N., & Kato, J. (2013). It’s alive! continuous feedback in UI programming. ACM SIGPLAN Notices, 48(6), 95–104. https://doi.org/10.1145/2499370.2462170
  3. Cypher, A. (1991). EAGER. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems Reaching through Technology - CHI ’91, 33–39. https://doi.org/10.1145/108844.108850
  4. Edwards, J. (2004). Example centric programming. ACM SIGPLAN Notices, 39(12), 84–91. https://doi.org/10.1145/1052883.1052894
  5. Evans, E., & Horowitz, J. (2024). An invitation into Droste’s Lair. https://vezwork.github.io/drostes-lair-post/. https://vezwork.github.io/drostes-lair-post/
  6. Goethals, M. (2024). Subsequently: Telling stories with pictures makes programs. In Workshop on Live Programming (LIVE). https://www.youtube.com/watch?v=4rLGHBio5UI
  7. Granger, C. (2022). Light Table. http://lighttable.com/
  8. Hancock, C. M. (2003). Real-Time Programming and the Big Ideas of Computational Literacy [PhD thesis]. Massachusetts Institute of Technology.
  9. Heer, J. (2019). Agency plus automation: Designing artificial intelligence into interactive systems. Proceedings of the National Academy of Sciences, 116(6), 1844–1850. https://doi.org/10.1073/pnas.1807184115
  10. Henderson, P., & Weiser, M. (1985). Continuous execution: the VisiProg environment. Proceedings of the 8th International Conference on Software Engineering, 68–74.
  11. Horowitz, J., & Heer, J. (2023). Live, Rich, and Composable: Qualities for Programming Beyond Static Text. Plateau Workshop. https://doi.org/10.1184/R1/22277338.V1
  12. Horowitz, J. (2024). Technical Dimensions of Feedback in Live Programming Systems. In Workshop on Live Programming (LIVE). https://joshuahhh.com/dims-of-feedback/
  13. Huang, R. (Lisa), Ferdowsi, K., Selvaraj, A., Soosai Raj, A. G., & Lerner, S. (2022). Investigating the Impact of Using a Live Programming Environment in a CS1 Course. Proceedings of the 53rd ACM Technical Symposium on Computer Science Education, 495–501. https://doi.org/10.1145/3478431.3499305
  14. Hutchins, E. L., Hollan, J. D., & Norman, D. A. (1985). Direct Manipulation Interfaces. Human–Computer Interaction, 1(4), 311–338. https://doi.org/10.1207/s15327051hci0104_2
  15. Imai, T., Masuhara, H., & Aotani, T. (2015). Shiranui: a live programming with support for unit testing. Companion Proceedings of the 2015 ACM SIGPLAN International Conference on Systems, Programming, Languages and Applications: Software for Humanity, 36–37. https://doi.org/10.1145/2814189.2817268
  16. Jakubovic, J., Edwards, J., & Petricek, T. (2023). Technical Dimensions of Programming Systems. The Art, Science, and Engineering of Programming, 7(3). https://doi.org/10.22152/programming-journal.org/2023/7/13
  17. Kaliski, S., Wiggins, A., & Lindenbaum, J. (2019). End-user Programming [Tech report]. Ink & Switch. https://www.inkandswitch.com/end-user-programming/
  18. Kasibatla, S., & Warth, A. (2017). Seymour: Live Programming for the Classroom. https://harc.github.io/seymour-live2017/
  19. Ko, A. J., Myers, B. A., & Aung, H. H. (2004). Six Learning Barriers in End-User Programming Systems. 2004 IEEE Symposium on Visual Languages - Human Centric Computing. 2004 IEEE Symposium on Visual Languages - Human Centric Computing. https://doi.org/10.1109/vlhcc.2004.47
  20. Lau, T., Wolfman, S. A., Domingos, P., & Weld, D. S. (2003). Programming by Demonstration Using Version Space Algebra. Machine Learning, 53(1), 111–156. https://doi.org/10.1023/A:1025671410623
  21. Lau, T., Bergman, L., Castelli, V., & Oblinger, D. (2004). Sheepdog. Proceedings of the 9th International Conference on Intelligent User Interfaces, 109–116. https://doi.org/10.1145/964442.964464
  22. Lau, T. (2009). Why Programming-By-Demonstration Systems Fail: Lessons Learned for Usable AI. AI Mag., 30(4), 65–67. https://doi.org/10.1609/aimag.v30i4.2262
  23. Lerner, S. (2020). Focused Live Programming with Loop Seeds. Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, 607–613. https://doi.org/10.1145/3379337.3415834
  24. Lerner, S. (2020). Projection Boxes: On-the-fly Reconfigurable Visualization for Live Programming. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–7. https://doi.org/10.1145/3313831.3376494
  25. Litt, G., Horowitz, J., van Hardenberg, P., & Matthews, T. (2025). Malleable Software: Restoring User Agency in a World of Locked-Down Apps [Tech report]. Ink & Switch. https://www.inkandswitch.com/essay/malleable-software/
  26. Little, G., Lau, T. A., Cypher, A., Lin, J., Haber, E. M., & Kandogan, E. (2007). Koala. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 943–946. https://doi.org/10.1145/1240624.1240767
  27. Norman, D. A. (1986). Cognitive Engineering. In User Centered System Design (pp. 31–62). CRC Press. https://doi.org/10.1201/b15703-3
  28. Nardi, B. A. (1993). A small matter of programming. MIT Press.
  29. Rauch, D., Rein, P., Ramson, S., Lincke, J., & Hirschfeld, R. (2019). Babylonian-style Programming: Design and Implementation of an Integration of Live Examples into General-purpose Source Code. The Art, Science, and Engineering of Programming, 3(3). https://doi.org/10.22152/programming-journal.org/2019/3/9
  30. Rein, P., Ramson, S., Lincke, J., Hirschfeld, R., & Pape, T. (2018). Exploratory and Live, Programming and Coding. The Art, Science, and Engineering of Programming, 3(1). https://doi.org/10.22152/programming-journal.org/2019/3/1
  31. Smith, D. C. (1975). PYGMALION: A Creative Programming Environment. Defense Technical Information Center. https://doi.org/10.21236/ada016811
  32. van der Storm, T., & Hermans, F. (2016). Live Literals. https://homepages.cwi.nl/~storm/livelit/livelit.html
  33. Sutherland, I. E. (1964). Sketchpad a Man-Machine Graphical Communication System. SIMULATION, 2(5), R-3-R-20. https://doi.org/10.1177/003754976400200514
  34. Tanimoto, S. L. (1990). VIVA: A visual language for image processing. Journal of Visual Languages and Computing, 1(2), 127–139.
  35. Victor, B. (2012). Inventing on Principle. https://vimeo.com/36579366
  36. Victor, B. (2012). Learnable Programming. http://worrydream.com/LearnableProgramming/
  37. Victor, B. (2013). Additional Notes on “Drawing Dynamic Visualizations.” https://worrydream.com/DrawingDynamicVisualizationsTalkAddendum/
  38. Victor, B. (2013). Drawing Dynamic Visualizations. https://vimeo.com/66085662