
Update docs for 0.14.2 #8

Closed
Technologicat opened this issue Aug 8, 2019 · 18 comments


Technologicat commented Aug 8, 2019

Document the new features in README. Mention bug fixes in changelog. See recent commits for details.

This part done. See below for any remaining points, especially the megapost and readings.

Technologicat added the enhancement label on Aug 8, 2019
Technologicat added this to the 0.14.2 milestone on Aug 8, 2019
Technologicat self-assigned this on Aug 8, 2019

Technologicat commented Aug 8, 2019

Done.


Other things to mention:

  • dyn is essentially SRFI-39, using the MzScheme approach in the presence of multiple threads. Done.
  • Role of unpythonic vs. other functional libraries: language extensions. Hopefully the beginning of the README now makes it clear.
  • syntax.autoref: this is essentially JavaScript's with construct, which strict mode forbids for security reasons. Explain in the docs that it's very important that the autoref'd object comes from a trusted source, since it can hijack any name lookups. Done.
  • Pampy is a nice pure-Python pattern matching library. Done.
  • setescape/escape act like Emacs Lisp's catch/throw. Done.
    • catch/throw exist also in some earlier Lisps, as well as in Common Lisp. But according to Seibel, in CL it's more idiomatic to use the lexically scoped variant BLOCK/RETURN-FROM. Done.
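
As a sketch of the analogy (hedged; the exact signatures are in the unpythonic README, but usage is essentially this):

```python
from unpythonic import setescape, escape

@setescape()  # establish an escape point, like Emacs Lisp's catch
def f():
    def g():
        escape("hello from g")  # like throw: unwind back to the escape point
        print("never reached")
    g()
    return "never reached either"

assert f() == "hello from g"
```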


Technologicat commented Aug 8, 2019

Done.


Small issues to fix:

  • Be explicit: "set the dynvar curry_context" means using with dyn.let(curry_context=...). Done.
  • ...to use with an iterable of functions. Done.
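
For reference, the dyn.let syntax in isolation (a minimal sketch; dyn looks up dynamically bound names along the current dynamic extent):

```python
from unpythonic import dyn

def f():
    return dyn.x          # dynamic lookup: sees whatever binding is in effect

def g():
    with dyn.let(x=42):   # bind the dynvar x for this dynamic extent
        return f()        # f sees x although it is not in f's lexical scope

assert g() == 42
```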


Technologicat commented Aug 14, 2019

A section on types has been added to doc/design-notes.md.


Maybe add a note about type systems somewhere in the README. This three-part discussion on LtU was particularly interesting.

  • "Dynamic types" held by values are technically tags.
  • Type checking can be seen as another stage of execution that runs at compilation time. In a dynamically typed language, this can be implemented by manually delaying execution until type tags have been checked - lambda, the ultimate staging annotation. Witness statically typed Scheme using manually checked tags, and then automating that with macros. (Kevin Millikin)
  • Dynamically typed code always contains informal/latent, static type information - that's how we reason about it as programmers. There are rules to determine which operations are legal on a value, even if these rules are informal and enforced only manually. (Anton van Straaten, paraphrased)
  • The view of untyped languages as unityped, argued by Robert Harper, using a single Univ type that contains all values, is simply an embedding of untyped code into a typed environment. It does not (even attempt to) encode the latent type information.
    • Sam Tobin-Hochstadt, one of the Racket developers, argues taking that view is missing the point, if our goal is to understand how programmers reason when they write in dynamically typed languages. It is useful as a type-theoretical justification for dynamically typed languages, nothing more.

Taking this into a Python context, if explicit is better than implicit (ZoP §2), why not make at least some of this latent information, that must be there anyway, machine-checkable? Hence type annotations (PEP 3107, 484, 526) and mypy.
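
As a trivial illustration (not from unpythonic; just to make the point concrete):

```python
from typing import List

def average(xs: List[float]) -> float:
    # The latent rule "xs is a sequence of numbers" becomes machine-checkable:
    # mypy will reject e.g. average("hello") at type-checking time.
    return sum(xs) / len(xs)
```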

Unpythonic itself will likely remain untyped indefinitely, since I don't want to enter that particular marshland with things like curry and with continuations. It may be possible to gradually type some carefully selected parts - but that's currently not on the roadmap.


Technologicat commented Aug 14, 2019

Also this information is now in doc/design-notes.md.


More on typing:


Technologicat commented Aug 15, 2019

Merged with the megapost below.


  • Look at SRFI-45 promises. Maybe say something about them and their relation to MacroPy promises.
  • Now this remark sounds interesting. Retroactively changing an object's type in Python, like in CLOS? Definitely need to try this at some point.
    • Michael Hudson got there first (2002, 2004): ActiveState Python recipe 160164: Automatically upgrade class instances when you redefine the class. Triggered on module reloads, too. To decide what class to upgrade, it looks for a class of the same name in the scope that defined the new class. Then there's an instance tracker that keeps weakrefs to the existing instances of the old class.
      • Hmm, couldn't we just gc.get_objects() and filter for what we want to find the instances to be updated? (Provided we still had a reference to the old class object.)
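
Perhaps something like this (an untested sketch, assuming we still hold a reference to the old class object):

```python
import gc

def update_instances(old_cls, new_cls):
    """Hypothetical helper: retroactively change the type of live instances."""
    for obj in gc.get_objects():
        if type(obj) is old_cls:        # exact type match; ignore subclasses
            obj.__class__ = new_cls     # the CLOS-like retroactive upgrade
```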


Technologicat commented Sep 4, 2019

Also this is now in the section on types in doc/design-notes.md.


  • In physics, units as used for dimension analysis are essentially a form of static typing.


Technologicat commented Sep 13, 2019

This is now mentioned in doc/design-notes.md.


  • Clojure has (trampoline ...), which works pretty much exactly like our TCO setup.

    The return jump(...) solution seems to be essentially the same there (the syntax is #(...)), but in Clojure, the trampoline must be explicitly enabled at the call site, instead of baking it into the function definition, as our decorator does.

    Clojure's trampoline system is thus more explicit and simple than ours (the trampoline doesn't need to detect and strip the tail-call target's trampoline, if it has one - because with Clojure's solution, it never does), at some cost to convenience at each use site.
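
For comparison, the unpythonic flavor, with the trampoline baked into the definition (usage as documented in the README):

```python
from unpythonic import trampolined, jump

@trampolined              # the trampoline lives in the function definition...
def countdown(n):
    if n == 0:
        return "done"
    return jump(countdown, n - 1)   # ...so call sites look like plain calls

assert countdown(100000) == "done"  # no call stack overflow
```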


Technologicat commented Sep 18, 2019

We now provide our own flavor of Common Lisp style conditions, see unpythonic.conditions. It was inspired by python-cl-conditions.


  • Definitely unpythonic: python-cl-conditions: Common Lisp's conditions system for Python.

    Hasn't been maintained for a few years, and the banner says tests fail to build, so should probably take it for a spin first, and if it turns out to need changes, consider whether to fix it there or cook up something similar here.

  • It's essentially a callback system, but with callbacks implemented at and advertised by the inner level; with the outer level making the choice of which one to apply when a given condition occurs.

  • Outline of how it works:

    • Two thread-local stacks, _restarts and _handlers.
    • The with restarts form (used at the implementing, low-level end) publishes the named restarts for the dynamic extent of the block.
      • This is essentially CL's RESTART-CASE.
      • The code within the block may use signal to signal a condition, without unwinding the call stack just yet.
        • Condition types may derive from Exception. However, condition instances are not raised, but instead passed as an argument to signal, because we don't want to unwind the call stack when we signal.
        • signal looks for a handler for the given condition type and calls it normally. If there are several, the most recently bound (leftmost on the stack) handler wins.
      • IMPORTANT: The value returned by with restarts (named call in the example) automates the receiving end of signal by installing an exception handler for InvokeRestart (see below). So instead of just signal(MyCondition, ...), actually use call(signal, MyCondition, ...).
        • Maybe this could be simplified: the top-level signal function could be internal since it's not intended to be used directly, and the return value of with restarts (the only context where signal is invoked) could be made to be used like signal(MyCondition, ...). Do we lose anything important with that strategy?
    • The with handle form (used at the client, high-level end) connects handlers to condition types.
      • This is essentially CL's HANDLER-BIND.
      • To choose the error-recovery policy, a handler may invoke_restart one of the advertised names. This is CL's INVOKE-RESTART.
      • Doing that raises an InvokeRestart exception, which causes the call stack to unwind.
        • The exception is caught by the call handler of the with restarts block from inside which the signal was sent.
        • The result of handling the signal (i.e. the return value of the restart that was chosen by the client code) becomes the return value of the call.
        • The Python implementation seems to be missing the part of the spec where a handler can cancel (by returning normally), which should make the system delegate to an earlier-bound handler for the same condition type.
  • Did some prototyping of this for unpythonic, mainly to confirm my understanding.
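
In the flavor that eventually shipped, usage looks roughly like this (a hedged sketch using the unpythonic.conditions names; the prototype's call(signal, ...) protocol discussed above was simplified away):

```python
from unpythonic.conditions import restarts, handlers, signal, invoke_restart
from unpythonic import unbox

class JustTesting(Exception):   # condition types may derive from Exception
    pass

def lowlevel():
    # Low level: advertise named restarts for the dynamic extent of the block.
    with restarts(use_value=(lambda x: x)) as result:
        signal(JustTesting("hello"))   # no call stack unwinding yet
        result << 42                   # the normal return value of the block
    return unbox(result)

# High level: choose which restart to invoke when the condition is signaled.
with handlers((JustTesting, lambda c: invoke_restart("use_value", 17))):
    assert lowlevel() == 17    # the restart's return value becomes the result
```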


Technologicat commented Sep 24, 2019

  • Oddly for a library, we're missing an executable tagline. Write a short demo for the killer features, place at top of README. Now tracked in #42 (Add a short demo to main README). Done.
  • Upload the changelog-in-progress here. Done.


Technologicat commented Sep 26, 2019

This has now been incorporated into the section on dyn in doc/features.md.


  • An explanation of special variables in CL can be found in Peter Seibel's book.

    See especially footnote 10 in that chapter for a definition of terms. Likewise, variables in our dyn have indefinite scope (because dyn is implemented as a module-level global, accessible from anywhere), but dynamic extent.

    So what we have in dyn is almost exactly like CL's special variables, except we're missing convenience features such as setf and a smart let that auto-detects whether a variable is lexical or dynamic (if the name being bound is already in scope).


Technologicat commented Sep 30, 2019

This text has been incorporated into CHANGELOG.md. Any further updates to the 0.14.2 changelog will take place there.


Changelog for 0.14.2 (draft; updated Nov 4, 2019)

"Greenspun" edition:

I think that with the arrival of conditions and restarts, it is now fair to say unpythonic contains an ad-hoc, informally-specified, slow implementation of half of Common Lisp. To avoid the bug-ridden part, we have tests - but it's not entirely impossible for some bugs to have slipped through.

This release welcomes the first external contribution. Thanks to @aisha-w for the much improved organization and presentation of the documentation!

Language version:

Rumors of the demise of Python 3.4 support are exaggerated. While the testing of unpythonic has moved to 3.6, there is not, nor will there be, any active effort to intentionally drop 3.4 support until unpythonic reaches 0.15.0.

That is, support for 3.4 will likely be dropped with the arrival of the next batch of breaking changes. The current plan is visible in the roadmap as the 0.15.0 milestone.

If you still use 3.4 and find something in unpythonic doesn't work there, please file an issue.

New:

  • Improve organization and presentation of documentation (Separate documentation files. #28).
  • Macro README: Emacs syntax highlighting for unpythonic.syntax and MacroPy.
  • fix: Break infinite recursion cycles (for pure functions). Based on idea and original implementation by Matthew Might and Per Vognsen.
  • Resumable exceptions, a.k.a. conditions and restarts. One of the famous killer features of Common Lisp. Drawing inspiration from python-cl-conditions by Alexander Artemenko. See with restarts (RESTART-CASE), with handlers (HANDLER-BIND), signal, invoke_restart. Many convenience forms are also exported; see unpythonic.conditions for a full list. For an introduction to conditions, see Chapter 19 in Practical Common Lisp by Peter Seibel.
  • More batteries for itertools:
    • fixpoint: Arithmetic fixed-point finder (not to be confused with fix).
    • within: Yield items from iterable until successive iterates are close enough (useful with Cauchy sequences).
    • chunked: Split an iterable into constant-length chunks.
    • lastn: Yield the last n items from an iterable.
    • pad: Extend iterable to length n with a fillvalue.
    • interleave: For example, interleave(['a', 'b', 'c'], ['+', '*']) --> ['a', '+', 'b', '*', 'c']. Interleave items from several iterables, slightly differently from zip.
    • CountingIterator: Count how many items have been yielded, as a side effect.
    • slurp: Extract all items from a queue.Queue (until it is empty) into a list, returning that list.
    • map: Curry-friendly thin wrapper for the builtin map, making it mandatory to specify at least one iterable.
  • ulp: Given a float x, return the value of the unit in the last place (the "least significant bit"). At x = 1.0, this is the machine epsilon, by definition of the machine epsilon.
  • dyn now supports rebinding, using the assignment syntax dyn.x = 42. For an atomic mass update, see dyn.update.
  • box now supports .set(newvalue) to rebind (returns the new value as a convenience), and unbox(b) to extract contents. Syntactic sugar for rebinding is b << newvalue (where b is a box).
  • islice now supports negative start and stop. (Caution: no negative step; and it must consume the whole iterable to determine where it ends, if at all.)
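
A taste of the new batteries (behavior as described above; treat as a sketch):

```python
import sys
from unpythonic import box, unbox, dyn, interleave, ulp

b = box(17)
b << 42                       # sugar for b.set(42)
assert unbox(b) == 42

with dyn.let(x=1):
    dyn.x = 2                 # rebinding via the new assignment syntax
    assert dyn.x == 2

assert list(interleave(['a', 'b', 'c'], ['+', '*'])) == ['a', '+', 'b', '*', 'c']
assert ulp(1.0) == sys.float_info.epsilon   # machine epsilon, by definition
```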

Fixed:


Technologicat commented Oct 6, 2019

Merged with the megapost below.


  • The second famous killer feature of CL - connecting to a running Lisp app and monkey-patching it live - is powered by Swank, the server component of SLIME. See [0], [1], [2] and [3].

    For a Swank server for Python, see [4]. For one for Racket, see [5].

    As [3] says, if you intend to monkey-patch a running loop, that only works if the loop is in FP style, using recursion... since then overwriting the top-level name it's calling to perform the recursion will make new iterations of the loop use the updated code. This requires the implementation to have TCO. (In unpythonic, this setup is possible with @trampolined.)

    Of course, we don't have anything like SBCL's #'save-lisp-and-die, or indeed the difference between defvar (init only if it does not already exist) and defparameter (always init) (for details, see Chapter 6 in Peter Seibel's Practical Common Lisp). Python wasn't really designed, as a language, for the style of development where an image is kept running for years and hot-patched as necessary.

    • But there's ZODB [1] [2] [3] for persistent storage in Python. It can semi-transparently store and retrieve any Python object that subclasses persistent.Persistent; haven't tried that class as a mixin, though (would be useful for persisting unpythonic containers box, cons, and frozendict).

      • Here persistent means the data lives on disk; not to be confused with the other sense of "persistent data structures", as in immutable ones, as in pyrsistent.

      • Only semi-transparently, because you have to assign the object into the DB instance to track it, and transaction.commit() to apply pending changes. Explicit is better than implicit.

      • But any data stored under the DB root dict (recursively) is saved. So it's a bit like our dyn, a special place into which you can store attributes, and which plays by its own rules.

        • Note data, not code; ZODB uses pickle under the hood, so functions are always loaded from their most recent definitions on disk.
        • ZODB is essentially pickle on ACID: atomicity, consistency, isolation, durability.
      • Possibly useful ZODB trivia:

        • Transactions also have a context manager interface; modern style is to use that.
        • The DB root exposes both a dict-like interface and an attribute-access interface. So dbroot['x'] = x and dbroot.x = x do the same thing; the second way is modern style. The old way is useful mainly when the key is not a valid Python identifier.
        • Think of the DB root as the top-level namespace for persistent storage. Place your stuff into a container object, and store only that at the top level, so that databases for separate applications can be easily merged later if the need arises. Also, the performance of the DB root isn't tuned for storing a large number of objects directly at the top level. To have a scalable container, look into the various BTrees in ZODB.
        • Attributes beginning with _v_ are volatile, i.e. not saved. They may vanish between any two method invocations if the object instance is in the saved state, because ZODB may choose to unload saved instances at any time to conserve memory.
        • Attributes beginning with _p_ are reserved for ZODB. Set x._p_changed = True to force ZODB to consider an object instance x as modified.
          • Useful e.g. when x.foo is a builtin list that was mutated by calling its methods. Otherwise ZODB won't know that x has changed when we x.foo.append("bar"). Another way to signal the change to ZODB is to rebind the attribute, x.foo = x.foo, with the usual consequences.
        • If your class subclasses Persistent, it's not allowed to later change your mind on this (i.e. make it non-persistent), if you want the storage file to remain compatible. See zopefoundation/ZODB#99 (It should be easier to change one's mind about persistence).
        • Be careful when changing which data attributes your classes have; this is a database schema change and needs to be treated accordingly. (New ones can be added, but if you remove or rename old ones, the code in your class needs to account for that, or you'll need to write a script to migrate your stored objects as an offline batch job.)
        • The ZODB docs were unclear on the point and there was nothing else on it on the internet, so tested: ZODB seems to handle properties correctly.
          • The property itself is recognized as a method. Only raw data attributes are stored into the DB.
          • After an object instance is loaded from the DB, reading a property will unghost it, just like reading a data attribute.
        • Beware of persisting classes defined in __main__, because the module name must remain the same when the data is loaded back (as mentioned in the tutorial).
        • Tutorial says: Non-persistent objects are essentially owned by their containing persistent object and if multiple persistent objects refer to the same non-persistent subobject, they’ll (eventually) get their own copies.
          • So beware, anything that should be preserved up to object identity and relationships should be made a persistent object. (There are persistent list and dict types in ZODB.)
  • The third killer feature of Common Lisp, compiling to machine code that rivals C++ in performance, is unfortunately not available for Python at this time. :)
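
Returning to ZODB: for concreteness, a hedged sketch of the basic workflow described above (untested; assumes a FileStorage database file):

```python
import transaction
import ZODB
from persistent import Persistent

class Counter(Persistent):        # semi-transparently persisted
    def __init__(self):
        self.count = 0

db = ZODB.DB("app.fs")            # the data lives on disk
connection = db.open()
dbroot = connection.root          # attribute-access interface (modern style)

with transaction.manager:         # context-manager interface; commits on exit
    if not hasattr(dbroot, "app"):
        dbroot.app = Counter()    # assign into the DB to track the object
    dbroot.app.count += 1
```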


Technologicat commented Oct 25, 2019


Technologicat commented Oct 28, 2019

Merged with reading links below.


Just archiving some links to reading...

  • Olin Shivers (1998) on 80% and 100% designs
  • Faré Rideau (2012): Consolidating Common Lisp libraries
  • Common Lisp style guide
  • Some opinions on modularity [1] [2]
  • Stefan Ram summarizing the subtleties of defining referential transparency (link from this discussion).
  • Oleg Kiselyov (2007): Dynamic binding, binding evaluation contexts, and (delimited) control effects. Could be interesting to be able to refer to previous (further up the stack) values of a dynamically bound variable.
  • SICP is now an internet meme. Maybe link to this one.
  • Maybe mention about the double-import shared-resource decorator trap in doc/design-notes.md. In the Pyramid web framework documentation:
    • Module-localized mutation is actually the best-case circumstance for double-imports. If a module only mutates itself and its contents at import time, if it is imported twice, that's OK, because each decorator invocation will always be mutating an independent copy of the object to which it's attached, not a shared resource like a registry in another module. This has the effect that double-registrations will never be performed.
    • In case of unpythonic, e.g. dynassign only sets its own state, so it should be safe. But regutil.register_decorator is potentially dangerous, specifically in that if the same module is executed once as __main__ (running as the main app) and once as itself (due to also getting imported from another module), a decorator may be registered twice. (It doesn't cause any ill effects, though, except for a minor slowdown, and the list of all registered decorators not looking as clean as it could.)
  • sans-io.
  • Clean Architecture.
    • In a nutshell, turning dependencies upside down. Push any platform-specific details to the edges of your system. Keep your core business logic free of dependencies. An outer part is allowed to depend on an inner one, but not the other way around.
    • Requires a bit more glue code than the traditional approach, but allows easily switching out platform-specific components.
    • E.g. your database glue code should depend on your business logic; but the business logic should assume nothing about a database.
  • PyPy3, fast, JIT-ing Python 3 that's mostly a drop-in replacement for CPython 3.6. MacroPy works, too.
  • Brython: Python 3 in the browser, as a replacement for JavaScript.
    • No separate compile step - the compiler is implemented in JS. Including a script tag of type text/python invokes it.
    • Doesn't have the ast module, so no MacroPy.
    • Also quite a few other parts are missing, understandably. Keep in mind the web client is rather different as an environment from the server side or the desktop. So for new apps, Brython is ok, but if you have some existing Python code you want to move into the browser, it might or might not work.

Technologicat added the documentation label on Oct 29, 2019

Technologicat commented Oct 31, 2019

The 0.14.2 documentation update megapost - Updated 7 Aug, 2020

Remaining documentation TODOs for 0.14.2. Done and cancelled ones struck out.

  • Mention in the top-level README that fploop is TCO'd, so you can run arbitrarily long loops with it. Otherwise the feature would hardly be worth a README mention. Done.
  • Emphasize the obsession with correctness. For example, memoize caches exceptions too. Done.
  • Advertise the REPL server. Currently only one easily missed link in the top-level TOC leads to it. Done.
  • Bump license year to 2020. Done.
  • Add contribution guidelines (a.k.a. HACKING.md). Rough first cut done.
  • Finalize contribution guidelines. Done.
    • Add official link for argument vs. parameter. Done.
    • Copyediting. Neutral tone. Done (history will judge whether successful).
  • Overhaul the start of the main README again: Done.
    • Move the explanation of macro magic to the section with the links to the detailed documentation.
    • The REPL server note doesn't fit the tone of the first paragraph. Unify the tone.
    • Something not working as advertised? Missing a feature? Documentation needs improvement? Issue reports as well as pull requests are welcome.
      • For convenience, provide links to the issue listing, and to the create new issue page.
      • Emphasize that getting a response may take a while, depending on which project I'm working on at the moment. While unpythonic is intended as a serious tool for productivity as well as for teaching (language concepts from outside the Python community, as well as metaprogramming techniques by example), right now work priorities mean that it's developed and maintained on whatever time I can spare for it.
  • Add a subsection Symbols and singletons. Done.
    • Document unpythonic.symbol.sym and unpythonic.symbol.gensym.
      • sym is the interned symbol type, like Lisp's symbols. The name determines object identity.
      • gensym makes new, unique uninterned symbols (class: gsym), with a pseudo-random UUID and a human-readable label. The UUID determines object identity. Like Lisp's gensym and JavaScript's Symbol.
        • A gensym never conflicts with any named symbol, even if one takes the UUID from it and uses that in place of the name of a named symbol. Documented.
      • Both types of symbols survive a pickle roundtrip. Instantiation is thread-safe.
      • For now, see the docstrings for sym and gensym, and unit tests in unpythonic/test/test_symbol.py for usage examples. (A short sketch is also included at the end of this list.)
    • Document unpythonic.singleton.Singleton, a singleton abstraction with thread-safe instantiation that interacts properly with pickle.
  • Emphasize (both in doc/repl.md and imacropy docs) the selling points of imacropy.console.MacroConsole: Done in doc/repl.md. Now just the imacropy docs. Done too.
    • It catches and reports import errors when importing macros.
    • It allows importing the same macros again, to refresh their definitions.
      • When you from somemod import macros, ..., the console automatically first reloads somemod, so that a macro import always sees the latest definitions.
    • It makes viewing macro docstrings easy.
      • When you import macros, beside loading them into the macro expander, the console automatically imports the macro stubs as regular runtime objects. They're functions, so just look at their __doc__.
      • This also improves UX. Without loading the stubs, from unpythonic.syntax import macros, let, would not define the name let at runtime. Now it does, with the name pointing to the macro stub.
  • Document the condition system (all the information is already written down as docstrings, just need a coherent narrative). First cut done in 63fb14d.
    • cerror, when a handler invokes its proceed restart, is a bit like contextlib.suppress, except it continues right from the next statement. Documented.
    • Add conditions/restarts to demo once documented. Done.
      • But the demo could be more convincing. Conditions only shine when restarts are set up at multiple levels of the call stack. But how to have a short example for that? Difficult to have a better demo. Now this is at least documented.
        • The problem with the single-level case is that it could be implemented as an error-handling mode parameter for the example's only low-level function.
        • With multiple levels, it becomes apparent that this parameter must be threaded through each level... unless we store the parameter in dyn.
        • But then, there can be several types of errors (just like exceptions), and the error-handling mode parameters (a separate one for each error type) have to be shepherded in an intricate manner. A stack is needed, so that an inner level may temporarily override the handler for a particular error type...
        • Enter the condition system, which automatically scopes handlers to their dynamic extent, and manages the handler stack automatically. It is the clean general solution to dynamically bind error-handling modes (for several types of errors, if desired) in a controlled, easily understood manner. The local programmability (i.e. the fact that a handler is not just a restart name, but an arbitrary function) is a bonus for additional flexibility.
        • If this sounds a lot like an exception system, that's because conditions are the supercharged sister of exceptions. The exception model conflates mechanism and policy. The condition model cleanly separates those, while otherwise remaining somewhat similar.
  • Improve demo of box. Show where it's needed - i.e. as a semantically explicit replacement for the single-item list hack, when a function needs to rebind its argument in such a way that the rebinding takes effect also in the caller (and cannot simply use nonlocal or global due to the original name not being in lexical scope). Done.
    • Have two top-level functions, f and g, where f calls g, and g wants to have the side effect of effectively rebinding a local variable of f...
    • Whether doing that is a good idea is a separate matter; the point is to demonstrate a simple instance of the problem class for which box is the solution. (See the sketch at the end of this list.)
  • box: deprecate accessing .x directly. Doesn't work with the new ThreadLocalBox. Now we have .get() to retrieve the value in an OOP way (though unbox(b) instead of b.get() is the recommended API). Done.
  • Document ThreadLocalBox. Like box, but the contents are thread-local. Done.
  • Document Shim. A Shim holds a box (or a ThreadLocalBox), and redirects attribute accesses on the shim to whatever object happens to currently be in the box. (E.g. this can combo with ThreadLocalBox to redirect stdin/stdout only in particular threads. Put the stream object in a ThreadLocalBox, then shim that, then assign the shim to replace sys.stdin...) Done.
  • Document async_raise. It can be used to inject asynchronous exceptions such as KeyboardInterrupt into another thread. Done.
    • Original detective work by Federico Ficarelli and LIU Wei. (Already added to AUTHORS.md.) Documented.
      • Raising async exceptions is a documented feature of Python's public C API, but it was never meant to be invoked from within pure Python code. But then the CPython devs gave us ctypes.pythonapi, which allows access to Python's C API from within Python. (If you think ctypes.pythonapi is too quirky, the pycapi PyPI package smooths over the rough edges.) Combining the two gives async_raise without the need for a C extension. Unfortunately PyPy doesn't currently (Jan 2020) implement this function in its CPython C API emulation layer, cpyext. Documented.
    • We need this for triggering KeyboardInterrupt in a remote REPL session (inside the session thread, which can be, at that time, stuck waiting in interact()), when the user presses Ctrl+C at the client side. This and similar awkward situations in network programming are pretty much the only legitimate use case for this. Documented.
  • Document unpythonic.net.PTYSocketProxy. This plugs a PTY between some Python code that expects to run in a terminal, and a network socket. Unlike many examples on the internet, this one doesn't use pty.spawn, so the slave doesn't need to be a separate process. Done.
  • Document unpythonic.net.server and unpythonic.net.client, which are unpythonic's way to allow the user to connect to a running Python process and modify its state (à la Swank in Common Lisp). Done, doc/repl.md.
    • See #56 (Finish implementing REPL server/client) for status on this feature.
    • CAUTION: As usual, the legends are exaggerated; making full use of such a feature requires foresight, adhering to a particular programming style. Documented.
      • Particularly, you can mutate only things which you can refer to (in any way, also indirectly) through the top-level namespace. If the faulty logic you need to hot-patch happens to be inside a closure, tough luck - the only way is then to replace the thing that produces the closures, and re-instantiate the closure. Documented.
      • It's impossible to patch a running loop - unless it's an FP loop defined at the top level, in which case it's possible to rebind the function name that refers to the loop body. Documented.
      • Using importlib.reload, it's possible to tell the running process to reload arbitrary modules from disk. But if someone has from-imported anything from them, tough luck - the from-import will refer to the old version, unless you reload the module that did the from-import, too. (Good luck catching all of them.) Documented.
      • Finally, keep in mind that if you replace a class definition, any existing instances will still use the old definition. OTOH, that's exactly what ActiveState recipe 160164 is for. Documented.
      • We don't have anything like save-lisp-and-die, so it's not like "Lisp image based programming" as in CL. If you need to restart, you need to restart normally. So in Python, never just hot-patch; always change your definitions on disk, so your program will run with the new definitions the next time it's cold-booted. Once you're done testing, then reload those definitions in the live process, if you need/want to. Documented.
  • Mention that continuations has some limitations, and in its present state, is useful mainly for teaching continuations in a Python setting. Documented.
    • Especially, when an exception is raised, the continuation is aborted, and regular Python control flow rules are followed. This is often fine, but it does prevent continuations from working with some patterns of control transfer via exceptions. Especially, restarts fail to honor the stored continuation. (Unwind-protecting with try/finally is fine. Context management with with blocks is fine, too.)
    • In a language with native continuations (Scheme, Racket), exceptions would be built on top of them, allowing them to interoperate more readily. But Python already has a separate exception system, which knows nothing about continuations. (And there's no way to know statically whether an exception raised in a with continuations block will be caught in such a block, or outside.)
    • Wait, what was I thinking? That's exactly how it should work! Raising an exception, or signaling and restarting, is expected to partly unwind the call stack, so the continuation from the level that raised the exception is expected to be cancelled. Documented.
    • Still, there are seams between continuation-enabled code and regular code. Documented.
    • In the README, emphasize that our continuations are general: not single-shot, like generators or async coroutines, but can resume multiple times from the same point. Documented.
  • Mention that our macros expect a from-import style for detecting uses of unpythonic constructs, even when those constructs are functions. This has always been the case, but I've forgotten to mention it in the docs. (All the code examples say it between the lines, kind of - they only use from-imports.) Documented at the start of doc/macros.md.
    • Macros are always from-imported as of MacroPy 1.1.0b2, but functions might not be, in general. Maybe doesn't need a mention.
    • Specifically, they expect certain bare names such as curry to refer to functions imported from unpythonic. A full list would be useful here, but to produce that I may have to dig through 2k lines of macro code (excluding comments, docstrings and blank lines). And flag the relevant places to easily find them again later. Mentioned curry specifically, but let's build the full list later. It's a lot of code to dig through - manually, because there is no unifying factor to grep for.
    • From-import seems to be closer to Racket and Common Lisp style than prefixing the namespace at each use site. In Python both styles are common. Maybe doesn't need a mention.
  • In the documentation for @looped_over, mention that any unpacking of the input item must be performed manually, because tuple parameter unpacking was removed from the language in Python 3.0. See PEP 3113. Documented.
    • It seems that the original implementation of the feature caused certain technical issues (detailed in the PEP), and it was not widely used. It is somewhat curious that re-engineering the implementation to overcome those issues was not even suggested in the PEP. Personally, I find it a bit silly that a for-loop can unpack tuples, but a def or a lambda cannot, making the language more irregular than it needs to be (both for and def/lambda create bindings). I suppose this is an instance of practicality beats purity.
    • JavaScript (technically, ECMAScript) does support it. It's cool, regardless of what one may think of the rest of JavaScript. (ES6 and later aren't that bad, except historical baggage. Latest spec.)
  • Elaborate on fix. Add a new subsection after the one on curry and reduction rules. Done in f73959e.
  • Add namelambda to demo. Done.
  • Add looped to demo. Done.
  • Check that the numerics.py example is up to date, update if necessary. Added some comments for other ways to do certain things.
  • Strictly speaking, what the comments in our macro code term as a literal list, the Python reference §6.2.5 terms as a list display. Similarly for sets and dicts. Unify the terminology, or at least add a note.
    • Our ll(...) plays the role of a linked list display.
    • Upon a closer look, no, actually, our notion of container literals and Python's notion of list/set/dictionary displays are subtly different. Displays are allowed to contain comprehensions and starred items (unpacking), whereas we don't allow those in order for a display to count as a literal. We could allow starred items in a future version, but the value of a comprehension (in general) can only be determined at runtime, so those are out for what we mean by container literals.
      • For the case of lazyrec, see the lazyrec[] syntax transformer and the function is_literal_container in unpythonic.syntax.lazify.
      • For the case of the implicit do[] (extra bracket syntax), see the syntax transformer implicit_do in unpythonic.syntax.letdo.
  • Python 3.8 prohibits yield and yield from in a genexpr, see §6.2.8. If our usage examples or unit tests have any, fix them. Checked, everything is peachy.
  • Our env is a bit like types.SimpleNamespace, but with additional features. Mention this. Done.
  • Mention Clojure's trampoline in TCO description, it's a design very similar to ours, in a Lisp that doesn't have TCO from the underlying implementation. Done in 5d6337a.
  • Need a logical place to archive miscellaneous notes: doc/readings.md.
    • Links to reading. Dumped there.
    • SRFI-45 promises. Maybe say something about them and their relation to MacroPy promises.
    • Now this remark sounds interesting. Retroactively changing an object's type in Python, like in CLOS? Definitely need to try this at some point.
      • Michael Hudson got there first (2002, 2004): ActiveState Python recipe 160164: Automatically upgrade class instances when you redefine the class. Triggered on module reloads, too. To decide what class to upgrade, it looks for a class of the same name in the scope that defined the new class. Then there's an instance tracker that keeps weakrefs to the existing instances of the old class.
        • Hmm, couldn't we just gc.get_objects() and filter for what we want to find the instances to be updated? (Provided we still had a reference to the old class object.)
    • Killer features of Common Lisp: see doc/design-notes.md.
      • Conditions and restarts, check!
      • Hot-patching, check!
        • The second famous killer feature of CL - connecting to a running Lisp app and monkey-patching it live - is powered by Swank, the server component of SLIME. See [0], [1], [2] and [3].
        • We provide unpythonic.net.server and unpythonic.net.client for hot-patching. It doesn't talk with SLIME, but perhaps Python doesn't need to. The important point (and indeed the stuff of legends) is being able to connect to a running process and change things as needed. Being able to do that is an obvious expected feature of any serious dynamic language. (Indeed both Common Lisp and Python can do it, with the appropriate infrastructure as a library.)
        • Also a Swank server for Python exists, but I haven't tested it. There's also another one for Racket.
        • As for Swank in CL, see server setup, SLIME and swank-client.
        • As [3] above says, if you intend to monkey-patch a running loop, that only works if the loop is in FP style, using recursion... since then overwriting the top-level name it's calling to perform the recursion will make new iterations of the loop use the updated code. This requires the implementation to have TCO. (In unpythonic, this setup is possible with @trampolined.) Documented.
        • Of course, we don't have anything like SBCL's #'save-lisp-and-die, or indeed the difference between defvar (init only if it does not already exist) and defparameter (always init) (for details, see Chapter 6 in Peter Seibel's Practical Common Lisp). Python wasn't really designed, as a language, for the style of development where an image is kept running for years and hot-patched as necessary. Documented.
        • But then there's ZODB [1] [2] [3] for persistent storage in Python. It can semi-transparently store and retrieve any Python object that subclasses persistent.Persistent; haven't tried that class as a mixin, though (would be useful for persisting unpythonic containers box, cons, and frozendict). Documented in doc/repl.md.
          • Here persistent means the data lives on disk; not to be confused with the other sense of "persistent data structures", as in immutable ones, as in pyrsistent.
          • Only semi-transparently, because you have to assign the object into the DB instance to track it, and transaction.commit() to apply pending changes. Explicit is better than implicit.
          • But any data stored under the DB root dict (recursively) is saved. So it's a bit like our dyn, a special place into which you can store attributes, and which plays by its own rules.
            • Note data, not code; ZODB uses pickle under the hood, so functions are always loaded from their most recent definitions on disk.
            • ZODB is essentially pickle on ACID: atomicity, consistency, isolation, durability.
          • Possibly useful ZODB trivia:
            • Transactions also have a context manager interface; modern style is to use that.
            • The DB root exposes both a dict-like interface and an attribute-access interface. So dbroot['x'] = x and dbroot.x = x do the same thing; the second way is modern style. The old way is useful mainly when the key is not a valid Python identifier.
            • Think of the DB root as the top-level namespace for persistent storage. Place your stuff into a container object, and store only that at the top level, so that databases for separate applications can be easily merged later if the need arises. Also, the performance of the DB root isn't tuned for storing a large number of objects directly at the top level. To have a scalable container, look into the various BTrees in ZODB.
            • Attributes beginning with _v_ are volatile, i.e. not saved. They may vanish between any two method invocations if the object instance is in the saved state, because ZODB may choose to unload saved instances at any time to conserve memory.
            • Attributes beginning with _p_ are reserved for ZODB. Set x._p_changed = True to force ZODB to consider an object instance x as modified.
              • Useful e.g. when x.foo is a builtin list that was mutated by calling its methods. Otherwise ZODB won't know that x has changed when we x.foo.append("bar"). Another way to signal the change to ZODB is to rebind the attribute, x.foo = x.foo, with the usual consequences.
            • If your class subclasses Persistent, it's not allowed to later change your mind on this (i.e. make it non-persistent), if you want the storage file to remain compatible. See zopefoundation/ZODB#99 (It should be easier to change one's mind about persistence).
            • Be careful when changing which data attributes your classes have; this is a database schema change and needs to be treated accordingly. (New ones can be added, but if you remove or rename old ones, the code in your class needs to account for that, or you'll need to write a script to migrate your stored objects as an offline batch job.)
            • The ZODB docs were unclear on the point and there was nothing else on it on the internet, so tested: ZODB seems to handle properties correctly.
              • The property itself is recognized as a method. Only raw data attributes are stored into the DB.
              • After an object instance is loaded from the DB, reading a property will unghost it, just like reading a data attribute.
            • Beware of persisting classes defined in __main__, because the module name must remain the same when the data is loaded back (as mentioned in the tutorial).
            • Tutorial says: Non-persistent objects are essentially owned by their containing persistent object and if multiple persistent objects refer to the same non-persistent subobject, they’ll (eventually) get their own copies.
              • So beware, anything that should be preserved up to object identity and relationships should be made a persistent object. (There are persistent list and dict types in ZODB.)
      • Compiling to native machine code?
        • Not in CPython's design goals.
        • Cython does it, but essentially requires keeping to a feature set easily compilable to C, not just some gradual type-tagging like in typed/racket or CL, plus compiler hints like in CL.
        • PyPy seems the best option. It JIT-compiles arbitrary Python code into native machine code. PyPy3 has essentially the same feature set as CPython 3.6.9.
          • Note that PyPy speeds up Python-heavy sections of code (the simpler the better; more amenable for analysis by the JIT), but interfacing with C extensions tends to be a bit slower in PyPy than in CPython, due to requiring an emulation layer for the CPython C API. Some core assumptions of PyPy are different enough from CPython (e.g. no reference counting; objects may move around in memory) that emulating the CPython semantics makes the emulation layer rather complex.
          • Due to being a JIT, doesn't speed up small one-shot programs; the code should have repetitive sections (such as loops), and run for at least a few seconds for the JIT to warm up. This is pretty much the MATLAB execution model, for Python (whereas CL performs ahead-of-time compilation).
          • PyPy itself is not the full story; their RPython toolchain can automatically produce a JIT for an interpreter for any new dynamic language implemented in RPython. Now that's higher-order magic if anything is.
      • These notes could go in doc/design-notes.md as Common Lisp, Python and productivity. Done, and slightly updated.
        • While at it, provide an angle on PG's Lisp essays now, almost 20 years later. They're well written, and have provided a lot of exposure for Lisp.
        • The base abstraction level of programming languages, even those in popular use, has increased since those essays were written. (The trend was visible already then, and was indeed noted in the essays.) The focus on low-level languages such as C++ has decreased. Java is still popular, but high-level FP languages that compile to JVM bytecode (Kotlin, Scala, Clojure) are rising.
        • Python has become highly popular, and is now closer to Lisp than it was 20 years ago, especially after MacroPy introduced syntactic macros to Python (in 2013, according to the git log). It wasn't bad as a Lisp replacement even back in 2000 - see Peter Norvig's essay Python for Lisp Programmers. Some more historical background, specifically on lexically scoped closures, in PEP 3104, PEP 227, and Historical problems with closures in JavaScript and Python.
        • In 2020, does it still make sense to learn the legendary CL? To know exactly what it has to offer, yes. As baroque as some parts are, there are a lot of great ideas there. Conditions are one. CLOS is another. More widely, in the ecosystem, Swank is one. Having more perspectives at one's disposal makes one a better programmer. But as a practical tool? Is it hands-down better than Python? Maybe no. Python has delivered on 90% of the productivity promise of Lisp. Both languages cut down significantly on accidental complexity. Python has a huge library ecosystem. MacroPy, unpythonic and pydialect are trying to push the language-level features a further 5%. (A full 100% is likely impossible when extending an existing language; if nothing else, there will be seams.)
    • Considering TAGBODY / GO (maybe just raise this as an enhancement issue). Tracked in #45 (Pythonify TAGBODY/GO from Common Lisp).
  • Handle the related issues; see any that have the label documentation in the milestone 0.14.2. Done.
  • Add example for memoize to doc/features.md, oddly it's missing one. Done.
  • Add examples for all new features, right in the documentation (we have some in unit tests already).
    • The new features of 0.14.2 should already be listed in CHANGELOG.md, so we only need to check that features.md includes a mention and example of each.
      • For anything listed in the 0.14.2 changelog, seems to be fine.
      • For anything added earlier, it should be fairly complete... but in case not, as the DDR announcer once famously said, 明日があるさ! (Sometimes even in English, There's always tomorrowww~~~~!)
  • "No other control flow forms"? Building a language this close to Lisp is an open invitation... challenge accepted!
    • Leaving this silly sentiment out for now.
  • Check the tone of our criticisms of Python, that we don't come down too harshly on it - for the most part, the language is excellent. (If it wasn't, it wouldn't have made sense to build this whole thing - maybe we should mention that.) Especially the focus on readability. The vertical-space-saving, indentation-sensitive surface syntax is a nice minor detail, too. As is the general with protocol.
    • Not release-blocking - maybe later. Should be fine-ish already.
  • Check the new parts of the documentation one more time. And ping aisha-w about a proofreading (may need to gather commit IDs, there have been quite many unrelated changes).
    • Not release-blocking, and they already did a good job earlier. If aisha-w still wants to check the new parts of the docs, that could go in 0.14.3, which is likely to feature much fewer changes than the mega-update 0.14.2.
  • Final check: make sure CHANGELOG.md is up to date.
    • Yes, it is.
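
The symbols sketch referenced above (per the docstrings of unpythonic.symbol; hedged):

```python
from unpythonic.symbol import sym, gensym

cat = sym("cat")
assert cat is sym("cat")       # interned: the name determines object identity

tabby = gensym("tabby")
another = gensym("tabby")
assert tabby is not another    # uninterned: each call creates a fresh symbol
```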
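
And the box sketch referenced above (f calls g; g effectively rebinds a local of f, visibly to f):

```python
from unpythonic import box, unbox

def g(b):
    b << 42            # rebind the contents; the caller sees the change

def f():
    x = box(17)        # without box, g could not rebind f's local x
    g(x)
    assert unbox(x) == 42

f()
```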

Technologicat changed the title from "Update README for 0.14.2" to "Update docs for 0.14.2" on Nov 13, 2019

Technologicat commented Nov 13, 2019

An updated version of this text has been incorporated into doc/features.md.


Condition system - draft:

Following Peter Seibel (Practical Common Lisp, chapter 19), we define errors as the consequences of Murphy's Law. As we already know, an exception system splits error-recovery responsibilities into two parts. In Python terms, we speak of raising and then handling an exception. The main difference is that a condition system, instead, splits error-recovery responsibilities into three parts: signaling, handling and restarting.

Why would we want to do that? The answer is improved modularity. Consider separation of mechanism and policy. We can place the actual error-recovery code (the mechanism) in restarts, at the inner level (of the call stack) - which has access to all the low-level technical details that are needed to actually perform the recovery. We can provide several different canned recovery strategies - generally any appropriate ways to recover, in the context of each low- or middle-level function - and defer the decision of which one to use (the policy), to an outer level. The outer level knows the big picture - why the inner levels are running in this particular case, i.e. what we are trying to accomplish and how. Hence, it is in the ideal position to choose which error-recovery strategy is appropriate in that context.

Fundamental signaling protocol

Generally a condition system operates as follows. A signal is sent (outward on the call stack) from the actual location where the error was detected. A handler at any outer level may then respond to it, and execution resumes from the restart invoked by the handler.

This sequence of catching a signal and invoking a restart is termed handling the signal. Handlers are searched in order from innermost to outermost on the call stack.

In general, it is allowed for a handler to fall through (return normally); then the next outer handler for the same signal type gets control. This allows chaining handlers to obtain their side effects (such as logging). This is occasionally referred to as canceling, since as a result, the signal remains unhandled.

Viewed with respect to the call stack, the restarts live between the (outer) level of the handler, and the (inner) level where the signal was sent from. When a restart is invoked, the call stack unwinds only partly. Only the part between the location that sent the signal, and the invoked restart, is unwound.

High-level signaling protocols

We actually provide four signaling protocols: signal (i.e. the fundamental protocol), and three that build additional behavior on top of it: error, cerror and warn.

If no handler handles the signal, the signal(...) protocol just returns normally. In effect, with respect to control flow, unhandled signals are ignored by this protocol. (But any side effects of handlers that caught the signal but did not invoke a restart, still take place.)

The error(...) protocol first delegates to signal, and if the signal was not handled by any handler, then raises ControlError as a regular exception. (Note the Common Lisp ERROR function would at this point drop you into the debugger.) The implementation of error itself is the only place in the condition system that raises an exception for the end user; everything else (including any error situations) uses the signaling mechanism.

The cerror(...) protocol likewise makes handling the signal mandatory, but allows the handler to optionally ignore the error (sort of like ON ERROR RESUME NEXT in some 1980s BASIC variants). To do this, invoke the proceed restart in your handler; this makes the cerror(...) call return normally. If no handler handles the cerror, it then behaves like error.

Finally, there is the warn(...) protocol, which is just a lispy interface to Python's warnings.warn. It comes with a muffle restart that can be invoked by a handler to skip emitting a particular warning. Muffling a warning prevents its emission altogether, before it even hits Python's warnings filter.

If the standard protocols don't cover what you need, you can also build your own high-level protocols on top of signal. See the source code of error, cerror and warn for examples (it's just a few lines in each case).

Notes

The name cerror stands for correctable error, see e.g. CERROR in the CL HyperSpec. What we call proceed, Common Lisp calls CONTINUE; the name is different because in Python the function naming convention is lowercase, and continue is a reserved word.

If you really want to emulate ON ERROR RESUME NEXT, just use Exception (or Condition) as the condition type for your handler, and all cerror calls within the block will return normally, provided that no other handler handles those conditions first.
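
For example (a hedged sketch; the handler proceeds from every correctable error signaled in the block):

```python
from unpythonic.conditions import cerror, handlers, invoke_restart

def process(x):
    if x < 0:
        cerror(ValueError(x))   # correctable: a handler may tell us to proceed
    return abs(x)

# ON ERROR RESUME NEXT: proceed from every cerror signaled in the block.
with handlers((Exception, lambda c: invoke_restart("proceed"))):
    assert [process(x) for x in (1, -2, 3)] == [1, 2, 3]
```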

Conditions vs. exceptions

Using the condition system essentially requires eschewing exceptions, using only restarts and handlers instead. A regular exception will fly past a with handlers form uncaught. The form just maintains a stack of functions; it does not establish an exception handler. Similarly, a try/except cannot catch a signal, because no exception is raised yet at handler lookup time. Delaying the stack unwind, to achieve the three-way split of responsibilities, is the whole point of the condition system. Which system to use is a design decision that must be made consistently on a per-project basis.

Be aware that error-recovery code in a Lisp-style signal handler is of a very different nature compared to error-recovery code in an exception handler. A signal handler usually only chooses a restart and invokes it; as was explained above, the code that actually performs the error recovery (i.e. the restart) lives further in on the call stack. An exception handler, on the other hand, must respond by directly performing error recovery right where it is, without any help from inner levels - because the stack has already unwound when the exception handler gets control.

Hence, the two systems are intentionally kept separate. The language discontinuity is unfortunate, but inevitable when conditions are added to a language where an error recovery culture based on the exception model (of the regular non-resumable kind) already exists.

CAUTION: Make sure to never catch the internal InvokeRestart exception (with an exception handler), as the condition system uses it to perform restarts. Again, do not use catch-all except clauses!

If a handler attempts to invoke a nonexistent restart (or one that is not in the current dynamic extent), ControlError is signaled using error(...). The error message in the exception instance will have the details.

If this ControlError signal is not handled, a ControlError will then be raised as a regular exception; see error. It is allowed to catch ControlError with an exception handler.


That should be most of the narrative. Still need to include API docs.

Relevant LtU discussion.

Stroustrup on why he chose not to have a resume facility in C++.


Harebrained idea: could we mix and match - make with handlers catch exceptions too? It may be possible to establish a handler there, since the only exception type raised by the condition system is InvokeRestart (and, well, ControlError, if something goes horribly wrong).

OTOH, not sure if that would be useful. As discussed above under Conditions vs. exceptions, the style of code that goes into a Lisp-style signal handler (choose a restart and invoke it; the actual restart code lives dynamically further in) is very different from what goes into an exception handler (perform the recovery directly further out on the call stack, since the stack has already unwound). Maybe it's better to keep the systems separate.

@Technologicat
Owner Author

Technologicat commented Jan 28, 2020

Updated 17 Mar 2020

Links to relevant readings. Has become doc/readings.md. Any further updates will take place there.


  • Evelyn Woods, 2017: A comparison of object models of Python, Lua, JavaScript and Perl.
    • Useful reading for anyone interested in how the object models differ.
    • It can be argued Python is actually prototype-based, like JavaScript.
    • See also prototype.py (unfortunately Python 2.7; to run it in 3.x, one would need at least to replace the removed new module with the appropriate classes from types).
  • Prototypes in OOP on Wikipedia. Actually a nice summary.
  • William R. Cook, OOPSLA 2009: On Understanding Data Abstraction, Revisited.
    • This is a nice paper illustrating the difference between abstract data types and objects.
    • In section 4.3: "In the 1970s [...] Reynolds noticed that abstract data types facilitate adding new operations, while 'procedural data values' (objects) facilitate adding new representations. Since then, this duality has been independently discovered at least three times [18, 14, 33]." Then: "The extensibility problem has been solved in numerous ways, and it still inspires new work on extensibility of data abstractions [48, 15]. Multi-methods are another approach to this problem [11]."
      • Multi-methods (as in CLOS's multiple dispatch?) seem nice, in that they don't enforce a particular way to slice the operation/representation matrix. Instead, one fills in individual cells as desired.
    • In section 5.4, on Smalltalk: "One conclusion you could draw from this analysis is that the untyped λ-calculus was the first object-oriented language."
    • In section 6: "Academic computer science has generally not accepted the fact that there is another form of data abstraction besides abstract data types. Hence the textbooks give the classic stack ADT and then say 'objects are another way to implement abstract data types'. [...] Some textbooks do better than others. Louden [38] and Mitchell [43] have the only books I found that describe the difference between objects and ADTs, although Mitchell does not go so far as to say that objects are a distinct kind of data abstraction."
  • Joel Spolsky, 2000: Things you should never do, part I
    • Classic, and still true:

      "We’re programmers. Programmers are, in their hearts, architects, and the first thing they want to do when they get to a site is to bulldoze the place flat and build something grand. We’re not excited by incremental renovation: tinkering, improving, planting flower beds.

      There’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming:

      It’s harder to read code than to write it."

  • Geoffrey Thomas, 2015: signalfd is useless
  • Martin Sústrik, 2012: EINTR and What It Is Good For
  • Nathaniel Smith, 2018: Notes on structured concurrency, or: Go statement considered harmful
    • A very insightful post on the near-isomorphism between the classic goto and classic approaches to handling async concurrency. Based on that analysis, he has built a more structured solution.
  • Olin Shivers (1998) on 80% and 100% designs
  • Faré Rideau (2012): Consolidating Common Lisp libraries
  • Common Lisp style guide
  • Some opinions on modularity [1] [2]
  • Stefan Ram summarizing the subtleties of defining referential transparency (link from this discussion).
  • Oleg Kiselyov (2007): Dynamic binding, binding evaluation contexts, and (delimited) control effects. Could be interesting to be able to refer to previous (further up the stack) values of a dynamically bound variable.
  • SICP is now an internet meme. Maybe link to this one.
  • Maybe mention about the double-import shared-resource decorator trap in doc/design-notes.md. In the Pyramid web framework documentation:
    • Module-localized mutation is actually the best-case circumstance for double-imports. If a module only mutates itself and its contents at import time, if it is imported twice, that's OK, because each decorator invocation will always be mutating an independent copy of the object to which it's attached, not a shared resource like a registry in another module. This has the effect that double-registrations will never be performed.
    • In case of unpythonic, e.g. dynassign only sets its own state, so it should be safe. But regutil.register_decorator is potentially dangerous, specifically in that if the same module is executed once as __main__ (running as the main app) and once as itself (due to also getting imported from another module), a decorator may be registered twice. (It doesn't cause any ill effects, though, except for a minor slowdown, and the list of all registered decorators not looking as clean as it could.)
  • sans-io, the right way to define network protocols.
  • Clean Architecture.
    • In a nutshell, turning dependencies upside down. Push any platform-specific details to the edges of your system. Keep your core business logic free of dependencies. An outer part is allowed to depend on an inner one, but not the other way around.
    • Requires a bit more glue code than the traditional approach, but allows easily switching out platform-specific components.
    • E.g. your database glue code should depend on your business logic; but the business logic should assume nothing about a database.
  • PyPy3: a fast, JITting Python 3 that's mostly a drop-in replacement for CPython 3.6. MacroPy works, too.
  • Brython: Python 3 in the browser, as a replacement for JavaScript.
    • No separate compile step - the compiler is implemented in JS. Including a script tag of type text/python invokes it.
    • Doesn't have the ast module, so no MacroPy.
    • Also quite a few other parts are missing, understandably. Keep in mind the web client is rather different as an environment from the server side or the desktop. So for new apps, Brython is ok, but if you have some existing Python code you want to move into the browser, it might or might not work, depending on what your code needs.
  • Counterpoint: Eric Torreborre (2019): When FP does not save us

@Technologicat
Owner Author

Finally, everything necessary is done. Closing.
