Imprecise programming: How to stop worrying and love the bug
May 3, 2011. Posted by olvidadorio in Programming.
A bug walks into a program…
…Nay, it rather puts on a suit, and becomes a program!
In this article I suggest a novel approach to programming that’s more like poetry and hopefully a lot less sucky… Caution: it’s a rough draft, but I’m keenly interested in making it intelligible, so if it isn’t, please hit me. So here goes the story:
This afternoon I was sitting on the loo, letting the usual trivialities flow through my mind; visual programming and such, the usual. One of my thoughts was: how cool would it be to have a user interface that returns to the expressiveness of command-line interfaces and the days of the programming language as operating system? One that once again makes the average user an applied programmer, as opposed to a mere button-pusher along the lines of “select from this range of three fuckin’ fabrics”, nothing more than a consumer of pre-programmed options.
So while sitting there, trying to figure out what that kind of user interface might look like, this idea for a new programming-language paradigm (re-)struck me. The paradigm goes in the direction of natural-language interfaces, but might also be combined with some visual approach. Let’s try to put it together:
Poetic programming or the imprecise language
An interpreted programming language that is not precise, whose interpreter neither lets you nor makes you fully control, predict or understand the syntax or the actual execution steps of your program. This language gives you the freedom to do “not all things considered” style programming, in exchange for giving up well-defined execution behavior. Essentially it treats your code not as a drab cookbook but as a deep and mysterious poem, full of unexplored subtleties.
…But you say: this is bullshit. I don’t care if it hurts! I wanna have control. Heaven forbid things go horribly wrong?!
Well, then we need fault-tolerance built into the language. Because, come to think of it, we need fault-tolerance anyway. We should constantly be monitoring and testing our applications’ behavior anyway. Things are going to go wrong anyway, anyhow, anywhere, at any time, and preferably while you’d rather be doing something else…
So a few fault-tolerance characteristics:
- Undo everywhere. All actions are journaled and undoable wherever possible.
- A limited and access-controlled set of non-undoable actions.
- Possibly “let-it-crash” supervision and process-encapsulation mechanisms as known from Erlang, Akka etc.
- Context, context, context. The interpreter basically performs anaphora and general reference resolution on the tokens used in the programmer’s instructions. This may be an unsolved problem, in general and in natural language, but since we are creating an artificial language from scratch that we can iterate on, reference resolution might actually be tractable. I imagine a system in which various resolution methods could be plugged in and refined bit by bit. Maybe one could even create a kind of economy/evolution, with selection criteria and feedback. But fundamentally this is a part where the language must be very malleable and customizable.
- Side effects all over the place. While functional programming languages such as Haskell try to banish, or at least isolate, side effects in order to get safe programs, we take the opposite approach and put side effects into everything. Let functions’ executions affect each other; this allows for context, more information, less work. And then make sure you can undo it when things go wrong.
- Case-based programming. Have various views in which to do things; the actions taken there are automatically journaled and can later be recalled using incomplete references.
- Parser: bottom-up “rewrite” (actually annotation) rules that can be applied iteratively while moving towards actions. These can throw up “questions”, i.e. methods of delivering values for unfilled arguments to pass to stored functions, yielding action/function-application graphs.
- Controlled execution of parsed actions (UI?).
- Various views for the user to exercise control, supervise what the machine is actually doing, be alerted to problems and, if necessary, modify its behavior.
- Means to create boundaries and constants: relations that should never change, facts that are true, states that should never be reached, tests that should always pass.
- How to actually extract the tokens. None of this should be set in stone, but there should at least be a few starting points; maybe set up several flavors, one oriented around classical programming-language tokens, another around cooking-recipe-style natural language.
- Interpreter adaptation in running systems: how to do it so that you don’t have to take care of every little thing, while still achieving good malleability.
- What user interfaces to choose for invocation, monitoring and interactive execution. I would lean towards a text/visual-graph combination. The visual graph is good for setting exact actions and monitoring what the machine is doing. The textual interface lends itself to referencing stuff more vaguely and in a more sweeping manner, letting the interpreter do the work of filling in the gaps.
- Otherwise it’s just fuckin’ obvious.
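To make the journaling, undo and incomplete-reference ideas above a little less hand-wavy, here is a minimal Python sketch. Everything in it is invented for illustration (the `Journal` class, the token-overlap matching, the toy `state` dict); a real system would need far smarter reference resolution, this just shows the shape of the idea:

```python
# Hypothetical sketch: a journaling interpreter with undo and
# vague-reference recall. All names here are made up.

class Journal:
    """Records every action so it can be undone or recalled later."""

    def __init__(self):
        self.entries = []  # (description_tokens, do_fn, undo_fn)

    def perform(self, description, do_fn, undo_fn):
        result = do_fn()
        self.entries.append((description.lower().split(), do_fn, undo_fn))
        return result

    def undo_last(self):
        if self.entries:
            _, _, undo_fn = self.entries.pop()
            undo_fn()

    def recall(self, fragment):
        """Resolve an incomplete reference: return the stored action whose
        description shares the most tokens with the fragment."""
        wanted = set(fragment.lower().split())
        best, best_score = None, 0
        for tokens, do_fn, _ in reversed(self.entries):
            score = len(wanted & set(tokens))
            if score > best_score:
                best, best_score = do_fn, score
        return best


state = {"greeting": None}
journal = Journal()

journal.perform(
    "set greeting to hello",
    lambda: state.update(greeting="hello"),
    lambda: state.update(greeting=None),
)

# An imprecise, partial reference still finds the stored action:
redo = journal.recall("that greeting thing")
redo()
print(state["greeting"])  # hello
```

The point of the sketch: because every action goes through the journal, “undo everywhere” and “recall by vague reference” fall out of the same data structure.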
The big challenge that needs to be solved is how to build a system that can execute new, unseen code based on what has been done on the system before, and on how those prior actions were invoked and described. Maybe some techniques from probabilistic natural language processing can help here.
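One very crude way to attack that challenge, offered as pure speculation: treat prior invocations as a case base, map a new instruction onto the nearest previously seen one by token similarity, and refuse to guess (i.e. throw up a “question”) when the match is too weak. A toy Python sketch, with invented names and a made-up threshold:

```python
# Toy sketch of case-based dispatch: map a new, unseen instruction onto
# the closest previously seen instruction by Jaccard token similarity,
# and refuse to guess when the best match is too weak. Illustrative only.

def jaccard(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def dispatch(instruction, case_base, threshold=0.3):
    """case_base: list of (seen_instruction, action) pairs."""
    scored = [(jaccard(instruction, seen), action)
              for seen, action in case_base]
    score, action = max(scored, key=lambda p: p[0])
    if score < threshold:
        return None  # too uncertain: the system should ask a question
    return action

cases = [
    ("copy the report to the backup folder", "copy-report"),
    ("delete old log files", "delete-logs"),
]

print(dispatch("copy the report again", cases))  # copy-report
print(dispatch("make me a sandwich", cases))     # None: ask back
```

A real version would replace bag-of-words overlap with proper probabilistic models, but the structure stays the same: score prior cases, act on the best one, and fall back to interaction below a confidence threshold.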
- I understand that Lisp offers means to redefine the language in the language itself. I’m no expert there, though, so enlighten me.
- Cucumber/Gherkin is a test-writing language that is less of a language and more of an interpreter whose input tokens can be arbitrarily redefined as natural-language-style entities.
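The Cucumber/Gherkin trick, stripped to its bones, can be sketched in a few lines of Python: the “language” is just a set of regex patterns mapping natural-language-ish lines onto functions. All names here are invented; real tools like cucumber or behave add far more machinery:

```python
import re

# Toy sketch of the Gherkin/step-definition idea: natural-language-ish
# lines are matched against registered patterns and dispatched to code.

steps = []

def step(pattern):
    """Decorator that registers a regex pattern for a step function."""
    def register(fn):
        steps.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"I put (\d+) (\w+) in the basket")
def put(count, item):
    return f"added {count} x {item}"

def run(line):
    for pattern, fn in steps:
        m = pattern.fullmatch(line)
        if m:
            return fn(*m.groups())
    raise ValueError(f"no step matches: {line!r}")

print(run("I put 3 apples in the basket"))  # added 3 x apples
```

Seen this way, Gherkin is a tiny instance of the imprecise-language idea: the token set is not fixed by the interpreter but redefined, pattern by pattern, by the user.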