06-15, 15:45–16:25 (Europe/London), Salisbury
As we all know (or, at least, as I've been trying to tell everyone), generators in Python are an extremely powerful API design technique. A generator represents the linear decomposition of a single computation into multiple parts, and such decomposition proves very useful in practice. For example, we can model an infinite computation and execute only the portions we desire. Similarly, we can simplify APIs that specify when a computation terminates, by modeling these computations as infinite sequences of steps and allowing the end user to directly control which steps are performed. We can even interleave the parts of multiple, distinct computations (though in Python ≥3.6 this is better done with the dedicated async and await syntax and the associated protocols).
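As a small illustration (my own sketch, not material from the talk), here is an unbounded computation, successive Newton's-method estimates of a square root, where the stopping rule belongs entirely to the consumer:

```python
import itertools

def approximations(x):
    """Yield an endless stream of successively better estimates of sqrt(x).

    The generator never decides when it is "done"; it only describes the steps.
    """
    guess = x
    while True:
        yield guess
        guess = (guess + x / guess) / 2

# The consumer, not the producer, chooses the termination rule:
for estimate in itertools.islice(approximations(2.0), 6):
    print(estimate)
```

Because the producer never terminates, callers can compose standard tools such as itertools.islice (or a convergence test of their own) to take exactly the steps they want.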
A generator-coroutine offers us an alternative formulation for a state machine, but one which represents state and transitions implicitly in the form of (linearised) source text; in other words, a state machine that we can read and understand like any other regular code (and where we have arbitrary control over data flow).
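For example (again a minimal sketch of my own, with hypothetical names), a coin-operated turnstile can be written as a generator-coroutine: its two states are not stored in a variable or an enum, they are simply the two points at which the code is suspended.

```python
def turnstile():
    """A two-state turnstile (locked/unlocked) as a generator-coroutine.

    The machine's current state is wherever execution is paused, so the
    transition table reads as ordinary straight-line code.
    """
    while True:
        # LOCKED: only a coin moves us on.
        event = yield "locked"
        while event != "coin":
            event = yield "locked"
        # UNLOCKED: pushing through re-locks the gate.
        event = yield "unlocked"
        while event != "push":
            event = yield "unlocked"

gate = turnstile()
print(next(gate))           # prime the coroutine -> 'locked'
print(gate.send("push"))    # still 'locked'
print(gate.send("coin"))    # -> 'unlocked'
print(gate.send("push"))    # -> 'locked' again
```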
But, in practice, the principles which support the use of generators (e.g., as iteration helpers) often clash with the code we get when we model with generator-coroutines, and a number of practical issues arise. While these issues may be surmountable (with enough effort and enough contortion), the question remains: are generator-coroutines really the answer?
Previously on generators (at PyData London 2018): https://youtu.be/m6asOJmfGpY
Creating a backtesting/simulator using generator-coroutines (at PyData NYC 2023): https://www.youtube.com/watch?v=pGGjS6CkDeE
More background:
- Why do I need generators? https://youtu.be/7jePVK4fvHQ
- Why do I need generator coroutines? https://youtu.be/8WSgTFBq984
- Earlier skepticism about generator-coroutine approaches: https://youtu.be/trellGpLQEs
No previous knowledge expected