Complishon: efficient heuristics for code completion

A couple of weeks ago I started prototyping a new code completion backend. The one available in Pharo 8.0 is good enough to complete variables and messages to literal objects. However, its accuracy was pretty low because completion candidates were sorted alphabetically, and it was even producing completion pauses in large images. This code completion prototype engine, which I originally baptised Complishon, was based on two ideas: generators and heuristics.

Generators are objects that provide a stream-like interface to consume values generated sequentially. Better with an example.

stream := Generator on: [ :generator |
  generator yield: 1.
  generator yield: 'hello'.
].
stream next.
>>> 1
stream next.
>>> 'hello'

One cool thing about generators is that the generator code will block until we call next. This is particularly interesting when iterating several big collections: we only really iterate them if we need them. And they come by default in Pharo :).
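For instance, here is a small sketch of that laziness, assuming the standard Generator and SystemDictionary APIs: the block walks every class in the image, but it only runs as far as needed to produce the values we actually request.

classStream := Generator on: [ :generator |
  Smalltalk globals allClassesDo: [ :each | generator yield: each ] ].
classStream next.
"Only the work needed to produce the first class has run so far"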

Heuristics, on the other hand, are not funky Pharo objects that come in a library. Heuristics are objects that represent rules, in our case rules for auto completion. Here are some examples of heuristics:

  • chances are I want to autocomplete first the methods defined in my package
  • chances are that a variable called anOrderedCollection will effectively hold an OrderedCollection instance

And so on.

If we encode enough of these little rules as objects, we can have a smart enough auto-completion engine that does not require complex algorithms or machine learning models. I'm not saying those would not be good; I just wanted a quick solution to make our lives easier.

This post talks about the architecture of the completion engine: what parts are inside it and how they relate. It turns out this prototype grew up and is making it to the big leagues in Pharo 9.0, where it will be called HeuristicsCompletion.

An architecture for lazy code completion

The heuristics completion engine is made of three main components:

  • lazy fetchers implemented using generators,
  • a completion object that works as a result-set
  • and several completion heuristics that are chosen from the current AST node.

Fetchers

Fetchers are subclasses of CoFetcher. They need to define the method #entriesDo:, which iterates a collection and evaluates the argument block with each entry. The fetcher framework takes care of creating a streaming API on top of it using generators. For example, a simple fetcher can be created and used with:

CoFetcher subclass: #MyFetcher
	instanceVariableNames: ''
	classVariableNames: ''
	package: 'Complishon-Core'
  
MyFetcher >> entriesDo: aBlock [
  1 to: 10000000000 do: aBlock
]

MyFetcher new next >>> 1.

Basic Fetchers

A simple generic fetcher works on a collection.

CoCollectionFetcher onCollection: #( a b b a c )

And an empty fetcher is useful to provide no results:

CoEmptyFetcher new

Fetcher combinators

Fetchers can be combined using simple combinators. A sequence of fetchers, created with the message #,, first fetches the elements of the first fetcher, then those of the second one. This also means that the second fetcher is never executed until the first one is completely consumed, which is an interesting property if the second one is expensive to compute.

CoInstanceVariableFetcher new, CoGlobalVariableFetcher new
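To illustrate that laziness, here is a small sketch reusing the CoCollectionFetcher and MyFetcher examples from above (only an illustration; it assumes the combined fetcher answers the same streaming protocol as a plain fetcher):

sequence := (CoCollectionFetcher onCollection: #( x y )), MyFetcher new.
sequence next.
"Consumes the first element of the collection fetcher; MyFetcher has not produced anything yet"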

A filter fetcher, created with the message #select:, decorates another fetcher and filters it with a block.

CoInstanceVariableFetcher new select: [ :e | "a condition..." ]

A no-repetition fetcher, created with the message #withoutRepetition, decorates another fetcher and filters out the elements it has already returned.

aFetcher withoutRepetition
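Putting the combinators together, here is a small sketch on the collection fetcher from above (again, only an illustration of the protocol shown so far):

"Drop the #c entries, then drop duplicates among the remaining elements"
((CoCollectionFetcher onCollection: #( a b b a c ))
    select: [ :e | e ~= #c ]) withoutRepetition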

The Juice: Fetchers for code completion

In addition, the engine provides fetchers to iterate the following:

  • CoInstanceVariableFetcher: instance variables of a class
  • CoClassVariableFetcher: class variables of a class
  • CoClassImplementedMessagesFetcher: messages implemented by a class
  • CoGlobalVariableFetcher: global variables in an environment
  • CoMethodVariableFetcher: variables accessible to an AST node (in order)
  • CoPackageImplementedMessagesFetcher: messages sent in a package

For code completion, another combinator proves useful to iterate a fetcher up a class hierarchy: CoHierarchyFetcher. This hierarchy fetcher decorates a ClassBasedComplishonFetcher (i.e., a CoClassImplementedMessagesFetcher, a CoInstanceVariableFetcher or a CoClassVariableFetcher), and can be created with the message #forHierarchy.
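For example, here is a hedged sketch of a fetcher that produces the messages implemented by OrderedCollection and then, one class at a time, those implemented by its superclasses (this is essentially the shape used by the self heuristic shown later):

(CoClassImplementedMessagesFetcher new
    completionClass: OrderedCollection;
    yourself) forHierarchy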

Completion ResultSet

The CoResultSet object stores a fetcher, and lazily fetches and internally stores objects on demand:

c := CoResultSet fetcher: (CoInstanceVariableFetcher new
    completionClass: aClass;
    yourself).

"The first time it will fetch 20 elements from the fetcher and store them"
c first: 20.

"The second time it will just return the already stored elements"
c first: 20.

"Now it will fetch some more objects to have its 25"
c first: 25.

The CoResultSet object allows one to (see the sketch after this list):

  • explicitly fetch using fetch: and fetchAll
  • retrieve the results using first, first: and results
  • filter those results using filterByString:
  • query it with hasMoreElements and notEmpty
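For instance, a small usage sketch combining some of these messages, reusing the aClass placeholder from the example above (the comments only restate the list):

c := CoResultSet fetcher: (CoInstanceVariableFetcher new
    completionClass: aClass;
    yourself).

"Explicitly fetch 10 elements"
c fetch: 10.
"Filter the results with a string"
c filterByString: 'name'.
"Retrieve the first 5 results"
c first: 5.
"Check whether there are still elements to fetch"
c hasMoreElements.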

Heuristics

When the autocompletion is invoked, it calculates how to autocomplete given the AST node where the caret is in the text editor. The following pieces of code show examples of heuristics implemented in the prototype.

Heuristics for messages

If the AST node is a message whose receiver is self, autocomplete all messages implemented in the hierarchy.

(CoClassImplementedMessagesFetcher new
    completionClass: aClass;
    forHierarchy) withoutRepetition

If the AST node is a message whose receiver is super, autocomplete all messages implemented in the hierarchy starting from the superclass.

(CoClassImplementedMessagesFetcher new
    completionClass: aClass superclass;
    forHierarchy) withoutRepetition

If the AST node is a message whose receiver is a class name like OrderedCollection, autocomplete all messages in the class-side hierarchy of that class.

(ClassImplementedMessagesComplishonFetcher new
    completionClass: (completionContext environmentAt: aRBMessageNode receiver name) classSide;
    forHierarchy) withoutRepetition

If the AST node is a message whose receiver is a variable that carries type information in its name, like anOrderedCollection, autocomplete all messages in the instance-side hierarchy of the guessed class, then continue with normal completion. There are two cases: variables starting with a such as aPoint, and variables starting with an such as anASTCache.

completionContext
  environmentAt: aRBMessageNode receiver name allButFirst asSymbol
  ifPresent: [ :class |
    ^ (ClassImplementedMessagesComplishonFetcher new
        completionClass: class;
        forHierarchy), PackageImplementedMessagesComplishonFetcher new  ].

completionContext
  environmentAt: (aRBMessageNode receiver name allButFirst: 2) asSymbol
  ifPresent: [ :class |
    ^ (ClassImplementedMessagesComplishonFetcher new
        completionClass: class;
        forHierarchy), PackageImplementedMessagesComplishonFetcher new  ]

If all the above fail, autocomplete all messages used in the current package. Chances are we are going to send them again.

^ CoPackageImplementedMessagesFetcher new
    complishonPackage: aPackage;
    yourself

Heuristics for method signatures

When autocompleting the name of a new method, chances are we want to override a method in the hierarchy, or to reimplement a method polymorphic with the ones existing in the current package.

self newSuperMessageInHierarchyFetcher,
    (CoPackageImplementedMessagesFetcher new
        complishonPackage: aPackage;
        yourself) withoutRepetition

Heuristics for variables

Variables accessed by an instance are first the ones in the method, then the ones in the hierarchy.

instanceAccessible := (CoMethodVariableFetcher new
    complishonASTNode: anAST;
    yourself),
  (CoInstanceVariableFetcher new
    completionClass: aClass) forHierarchy.

Then, variables accessed by an instance are also the ones in the class variables and globals.

globallyAccessible := (CoClassVariableFetcher new
    completionClass: complishonContext complishonClass) forHierarchy,
  (CoGlobalVariableFetcher new
    complishonEnvironment: anEnvironment;
    yourself).

What’s next?

Try it in the latest Pharo 9.0 dev build!

We would like to have new ideas for heuristics.

For example, I've started integrating it with a fast type inferencer that looks at initialize methods, or with a fast type inferencer like RoelTyper, whose latest version for Pharo can be found on GitHub.

Why not machine learning? Statistical models?

How a modal window could break the event cycle: A debugging story

A couple of weeks ago I started an aggressive cleaning of Morphic, the graphics framework we currently use in Pharo, searching for a cheap and simple path to move the Pharo IDE to HiDPI screens. The cleanups mostly involved removing global state. For example, I transformed code like the following:

ActiveHand doSomething.

into something like:

self activeHand doSomething.

Morph >> activeHand [
  ^ self activeWorld activeHand
]

Turns out that in the middle I generated (or actually unveiled?) a couple of bugs :). Yet, I was actually surprised that I did not generate MORE bugs. So this post is a story about debugging. Because I think we don't discuss debugging enough, and inexperienced developers often struggle to find good debugging strategies.

The bug(z)

About two weeks ago I made two pull requests to Pharo, cleaning up a bit of the mess in Morphic's global state. These were PR 5916 and PR 5926. A week later a couple of bugs started to pop up. This story is about issue 5945 and issue 6003.

The insect in question was showing itself as a problem managing modal windows and events. When trying to rename a method protocol, a pop-up opened, and then no matter where you clicked it kept popping up again and again and again.

Original screenshot showing the problem by @Myroslava.
Taken from issue https://github.com/pharo-project/pharo/issues/5945

Finding the bug in the dark

If you follow the discussion in the issue, you'll find that my first approach was to try to reproduce it. And I couldn't. Having an unreproducible bug is kind of a hassle, because it makes me ask lots of questions, and not necessarily in the right direction. Is the bug dependent on @Myroslava's environment? Is it Windows? Because I was on OSX. Was it that she was using an old(ish) Pharo version? Because I was on the latest dev one. A couple of rounds of question ping-pong gave me the information I needed, but I felt that this kind of information is needed for all bugs, and it's a real time saver when you're chasing a nasty cockroach.

That day (was it that day? I don't remember, but allow me to take some poetic licence here) I set up a couple of GitHub issue templates. And I'm happy to see, a couple of weeks later, that issue reports have gotten better 🙂

Now, coming back to the story, I had to explore under the carpet to find the bug, and I searched in many different places. First I tried comparing the behavior of the button in the status bar and the context menu. Then I tried to compare the behavior of the Calypso browser and the message browser. Then, within Calypso, I compared the class search dialog with the offending dialog. This looked like a good approach, because the class search dialog was not showing the issue. But it turned out to be a dead end, because both dialogs are implemented in **completely different** ways… In the middle of the exploration, I saw the bug a couple of times, but I could not grasp what was causing it in some cases and not in others.

I needed a reliable way to trigger it.

The epiphany

Then I got the “House MD” moment. The “Aha!” that pops up in your head. But no Vicodin involved. It turned out that the bug only appeared from the control in the status bar.

And that control has three sub controls. A delete icon button, an edit icon button, and a label. The offending popup was being created when clicking the edit icon button, and when clicking on the label. The bug always happened when clicking on the label. Even more! The bug only happened when clicking on the label.

Gotcha!

A strategy to quickly catch the bug

Now I knew how to make the mouse come out of the wall. However, what we were seeing so far was just a symptom of the bug. I could not tell for sure whether this bug was specific to this particular control in the Calypso browser and had just been unveiled by my refactoring, whether it was a more general problem, or whether I did something wrong during my early refactorings. I did not know. So I had to find the underlying cause of the bug.

But I had two weapons at hand. I had two cases opening the same modal pop-up, one showing the bug and the other one not showing it. So I proceeded to dissect and compare them. First attempt: I found the **single** place where the pop-up was being opened, and I logged the stack at that point, to see if and how the executions differ.

thisContext stack traceCr.

Turns out this first attempt gave me enough information to hypothesize about the issue! Let's see the two stacks: first the one showing the bug (notice it starts from LabelMorph >> click:), then the one not showing the bug.

ClyQueryBrowserContext(ClySystemBrowserContext)>>requestSingleMethodTag:suggesting:
ClyMethodTagsAndPackageEditorMorph>>requestTag
[ self isExtensionActive
    ifTrue: [ self requestPackage ]
    ifFalse: [ self requestTag ] ] in ClyMethodTagsAndPackageEditorMorph>>openEditor
BlockClosure>>on:do:
ClyMethodTagsAndPackageEditorMorph>>requestChangeBy:
ClyMethodTagsAndPackageEditorMorph>>openEditor
MorphEventSubscription>>notify:from:
[ :s | result := result | ((s notify: anEvent from: sourceMorph) == true) ] in MorphicEventHandler>>notifyMorphsOfEvent:ofType:from:
Set>>do:
MorphicEventHandler>>notifyMorphsOfEvent:ofType:from:
MorphicEventHandler>>click:fromMorph:
LabelMorph(Morph)>>click:
MouseClickState>>click
MouseClickState>>handleEvent:from:
HandMorph>>handleEvent:

ClyQueryBrowserContext(ClySystemBrowserContext)>>requestSingleMethodTag:suggesting:
ClyMethodTagsAndPackageEditorMorph>>requestTag
[ self isExtensionActive
    ifTrue: [ self requestPackage ]
    ifFalse: [ self requestTag ] ] in ClyMethodTagsAndPackageEditorMorph>>openEditor
BlockClosure>>on:do:
ClyMethodTagsAndPackageEditorMorph>>requestChangeBy:
ClyMethodTagsAndPackageEditorMorph>>openEditor
[ target perform: actionSelector withArguments: arguments ] in IconicButton(SimpleButtonMorph)>>doButtonAction
BlockClosure>>ensure:
CursorWithMask(Cursor)>>showWhile:
IconicButton(SimpleButtonMorph)>>doButtonAction
IconicButton(SimpleButtonMorph)>>mouseUp:
IconicButton(Morph)>>handleMouseUp:
MouseButtonEvent>>sentTo:
IconicButton(Morph)>>handleEvent:
[ result := focusHolder handleFocusEvent:
    (anEvent transformedBy: (focusHolder transformedFrom: self)) ] in HandMorph>>sendFocusEvent:to:clear:
BlockClosure>>on:do:
WorldMorph>>becomeActiveDuring:
HandMorph>>sendFocusEvent:to:clear:
HandMorph>>sendEvent:focus:clear:
HandMorph>>sendMouseEvent:
HandMorph>>handleEvent:
MouseClickState>>handleEvent:from:
HandMorph>>handleEvent:

Notice that both stacks start from handleEvent:, but one ends up sending a mouseUp: while the other one sends a click: event, with this MouseClickState guy in the middle orchestrating everything with the HandMorph.

Some background: how mouse events work

To fix this bug, I had to get my hands dirty and learn some details about how mouse events work in Morphic. Those not interested in such details can skip this section, although I think it's pretty educational ^^.

Morphic worlds have hand objects, which are responsible (among other things) for dispatching events to the other morphs in the world.
In each UI cycle, a hand takes the events from a queue of pending OS events, generates Morphic events for them, and dispatches each to the right guy.

Now, mouse events as they come from the OS have two main flavours: mouseDown and mouseUp. There is no click or double-click event; those are simulated by the hand.

The process of simulating a click works roughly as follows. When a morph is interested in clicks, it subscribes to mouse down events. Then, when it receives a mouse down, it has to ask the hand “Hey! I would like clicks!”.
Usually this is done with some code like this:

mouseDown: anEvent
  ...
  anEvent hand waitForClicksOrDrag: h
    event: (anEvent transformedBy: tfm)
    selectors: { nil. nil. nil. #dragTarget:. }
    threshold: 5
  ...

When that happens, the hand changes its state, and stores a MouseClickState object that implements a kind of state machine to detect clicks and double clicks.

Finally, after the mouseDown the morph receives a mouseUp (which should be transformed into a click). The hand, before sending the event to the interested morph, makes it pass through the state machine, which has some code like this:

"Oh, the guy was interested in clicks, and gave me a mouse up following a mouse down in some small enough time interval. Let's click!"
self click.    
aHand handleEvent: firstClickUp.

Where

  1. click will send a click event to the morph and
  2. handleEvent: will send the mouseUp

Fixing the bug

Finding the real fix took more time than the couple of minutes you will spend reading this post. My first fix was to invert handleEvent: and click. This fixed the issue in question, but led to other problems like this one, because now the order of events was altered (and some apps were expecting clicks before mouse ups). I'll just go to the juicy details.

The problem was that during the click event, Calypso was opening a modal window. Modal windows suspend the current execution and do not return control to the caller until the window is closed. Something like this:

openModal
  self open.
  "Start a sub UI loop that will not return the control to the caller until I'm closed."
  [ self isOpen ] whileTrue: [ self doAnotherUIIteration ].

This meant that the call to click never returned, and thus the mouse up was never dispatched using handleEvent:. When the sub UI loop started, the morph with the focus started receiving new events while the mouse up had never been sent, so the MouseClickState state machine thought it was still in a click state…

I proposed a solution for this in this pull request. The main idea behind it is that the event loop should always dispatch a single event per cycle; that way, modal sub-cycles will start with a correct state. So if, as in this case, one event (a mouse up) generates several events (a click, then a mouse up), only the first one is dispatched, and the second one is returned to the event queue for later treatment in a subsequent cycle.
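As a hypothetical sketch of the shape of that idea (not the actual code from the pull request; pushEventBackInQueue: is an illustrative selector, not a real Pharo method), the state-machine snippet from above would go from:

"Before: both the click and the mouse up are dispatched in the same cycle"
self click.
aHand handleEvent: firstClickUp.

to something like:

"After (illustrative only): dispatch the click now, and push the mouse up back
to the queue so it is dispatched in a subsequent cycle. A modal sub-loop opened
by the click then starts with a clean event state."
self click.
aHand pushEventBackInQueue: firstClickUp.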

Some conclusion?

Please take care of your bug reports, they are life savers :).

The idea behind comparing executions is super powerful. We have a super smart guy in the team doing a PhD around this topic. This post just shows how to exploit the idea without any tool support other than stack access and a log (thisContext stack traceCr).

MouseClickState probably needs more testing and love.

And I hope you had fun ^^

Why can’t Pharo have constructors?

The other day we were talking with Pablo about pre-tenuring objects while discussing a couple of papers like this one and this other one. These papers describe the idea of pre-tenuring objects in Java. The main idea is to identify allocation sites in the code (e.g., new MyClass(17, true, someObject)), take statistics about the average size and lifetime of the objects created at those allocation sites, and then decide to directly tenure objects (i.e., move them to a region of memory garbage-collected less often) at those allocation sites when convenient. Now, garbage collection is actually not the point of this post, but this story serves as a kick-off for it.

So, one of the points of the papers above was to identify allocation sites. This popped a question in my mind:

Hey! How can we identify allocation sites in Pharo?

In Pharo, allocations happen during the execution of the method basicNew. The method new calls basicNew to create an instance and then initializes it with the initialize method. Then, in our own classes, we create our own constructors, i.e., instance creation methods, by defining class methods that call new.

Behavior >> basicNew [
  "This is a simplified version for the blog"
  <primitive: 70 error: ec>
  self primitiveFailed
]

Behavior >> new [
  ^ self basicNew initialize
]

Person class >> newInstanceWithName: aName age: anAge [
  ^ self new
    name: aName;
    age: anAge;
    yourself
]

Now, the problem here is that from an execution perspective the real allocation point is the expression self basicNew inside the method new. But that is also the same allocation site for most objects in the entire runtime. If we want to distinguish allocation sites, we need to make them more explicit.

So let’s add constructors

This morning, coffee in hand, baby napping, I decided to give implementing constructors a try. I wanted to avoid explicitly calling new or basicNew, and to avoid extra yourself sends (which are always hard to follow for newcomers). I have seen several people in the past doing similar things to avoid having getter/setter methods in their classes: they implemented instance creation methods that received dictionaries, for example, used runtime reflection, and/or used long initializeMyObjectWithX:andY:thenZ: messages on the instance side.

I decided I wanted to have syntax sugar on my side :). I wanted to transform my newInstanceWithName:age: method above into something like this:

Person class >> newInstanceWithName: aName age: anAge [
  name := aName.
  age := anAge
]

And I decided to do it with a compiler plugin.

Compiler Plugins in Pharo

Pharo supports a couple of features that allow us to customize the language in sometimes funny, sometimes useful, and most of the time *very easy* ways. One of those features is compiler plugins.

Compiler plugins in Pharo are objects we can register with the compiler. The compiler will call our plugin in the middle of the compilation, allowing us to insert our own AST (Abstract Syntax Tree) transformations into the compilation chain.

To create our plugin, we need to subclass OCCompilerASTPlugin and define the transform method.

OCCompilerASTPlugin subclass: #ConstructorPlugin
	instanceVariableNames: ''
	classVariableNames: ''
	package: 'ConstructorPlugin'

ConstructorPlugin >> transform [
  "We will work in here"
]

Our compiler plugin needs to ensure two things. First, the method we wrote as a constructor needs to be executed on an instance of the class, and not on the class itself. Second, it needs to somehow create the instance and execute that method on it. What we want is for the code we wrote above to be automatically (and transparently) translated into:

Person >> newInstanceWithName: aName age: anAge [
  name := aName.
  age := anAge
]

Person class >> newInstanceWithName: aName age: anAge [
  ^ self new newInstanceWithName: aName age: anAge
]

Compiling the code on the instance side

As you see above, we should be able to compile the constructor method directly on the instance side, without much manipulation. In our compiler plugin, we can ask the AST for the class where we are compiling the method, get the instance side, and simply compile it there.

ConstructorPlugin >> transform [
  | classToCompileIn |
  classToCompileIn := ast methodClass.
  classToCompileIn instanceSide compiler compile: ast sourceCode.
]

Note several things in the example above. First, I compile the new method by asking the AST for its entire source code; this is because we cannot (for now) initiate a compilation from an AST. Second, I'm sending compile: to the compiler of the class rather than compiling through the class itself, which means the method we are compiling will not be installed in the class. This will be useful to hide the generated method, as we will see later.
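A quick way to convince yourself of that second point (a hedged sketch, assuming the same compiler API used above; the foo selector is just an example):

"compile: answers a new CompiledMethod but does not install it in the class"
method := Object compiler compile: 'foo ^ 42'.
Object includesSelector: #foo.
>>> false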

Generating the new real class-side method

Now that we have created an instance-side method to initialize our instance, we need to create the class-side method that will create the instance and call this method. I've decided not to install the instance-side method in the class, to keep generated code hidden (and easy to discard). So what I want to generate now is a method that takes the instance-side method and executes it on a new instance. That is, I want to generate some code like:

Person class >> newInstanceWithName: aName age: anAge [
  ^ self new
    withArgs: { aName . anAge }
    executeMethod: TheInstanceSideMethod
]

Turns out generating code in Pharo can easily be done with ASTs too. And since ASTs are our communication channel with the compiler, we just need to replace the current AST with a new AST reflecting the code we want. We can create an AST for the method above with the following expression, where ast is the original AST:

RBMethodNode
    selector: ast selector
    arguments: ast arguments
    body: (RBSequenceNode statements: { 
        RBReturnNode value: (RBMessageNode
            receiver: (RBMessageNode
                receiver: RBSelfNode new
                selector: #new)
            selector: #'withArgs:executeMethod:'
            arguments: { 
                RBArrayNode statements: ast arguments.
                (RBLiteralValueNode value: hiddenInstanceSideMethod) })
    })

In this expression we create a new method node with the same selector as before and the same arguments, but with a single statement. This statement is a return node wrapping a message send: our self new withArgs: { aName . anAge } executeMethod: TheInstanceSideMethod. Note also that we are not hardcoding the arguments aName and anAge anywhere in the method. We are reusing the argument nodes coming from the original method, so this transformation will work for any constructor.

Putting it together

Finally, we can put the two things together in a final transform method. In this (almost) final version, I've added two extra details to our plugin. So far, if we installed it as-is, this plugin would have tried to transform all class-side methods in a class. We don't want that; we want to scope this transformation to constructors only. In this first iteration, I decided to scope the plugin to methods that are marked with the <constructor> pragma and are on the class side.

ConstructorPlugin >> transform [
    | classToCompileIn hiddenInstanceSideMethod |
    classToCompileIn := ast methodClass.
    ((ast hasPragmaNamed: #constructor) not or: [ classToCompileIn isInstanceSide ])
        ifTrue: [ ^ ast ].

    hiddenInstanceSideMethod := classToCompileIn instanceSide compiler compile: ast sourceCode.
    ast := RBMethodNode
        selector: ast selector
        arguments: ast arguments
        body: (RBSequenceNode statements: {
            RBReturnNode value: (RBMessageNode
                receiver: (RBMessageNode
                    receiver: RBSelfNode new
                    selector: #new)
                selector: #'withArgs:executeMethod:'
                arguments: {
                    RBArrayNode statements: ast arguments.
                    (RBLiteralValueNode value: hiddenInstanceSideMethod) })
        })
]

Finally, we can install this plugin to work on our class by redefining the classSideCompiler method.

Person class >> classSideCompiler [
  ^ super classSideCompiler addPlugin: ConstructorPlugin
]
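With the plugin installed, a constructor can then be written and used like this (a sketch of the intended usage, assuming Person defines the name and age instance variables):

Person class >> newInstanceWithName: aName age: anAge [
  <constructor>
  name := aName.
  age := anAge
]

"The plugin rewrites the method behind the scenes, so callers use it as a normal class-side message"
aPerson := Person newInstanceWithName: 'Alice' age: 42.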

A rough remaining detail

If you've followed along until here and tried the code above in a method marked with <constructor>, you have probably noticed some weird behaviour. Depending on how you defined the class where your constructor is, you may have noticed different things: a pop-up asking you about undeclared variables, or the fact that accepting that pop-up defines the variable on the class side. This happens because the compiler performs a semantic analysis on the AST before giving the plugins a chance to do their work. In other words, it is analysing our constructor method on the class side, which does not make much sense for us.

To solve this issue, I extended the compiler plugin mechanism to also call the plugins **before** the semantic analysis. To make this work properly, I made compiler plugins have two hooks: transformPreSemanticAnalysis and transformPostSemanticAnalysis.

The offending code in the compiler now looks like:

self pluginsDo: [ :each |
    ast := each transformPreSemanticAnalysis: ast ].
self doSemanticAnalysis.
self pluginsDo: [ :each |
    ast := each transformPostSemanticAnalysis: ast ].

What’s next?

Implementing this plugin leaves several open doors for improvement in the compiler infrastructure. Here is a list of potential improvements:

  • I should push my pre/post semantic analysis hooks to the mainstream compiler, after discussion with the compiler guys 🙂
  • Compiler plugins only know the AST, but it would be good to have access to the current compiler too, for example to ask for compilation options. In our case, we had to access the compilation class from the AST, which I find confusing to say the least
  • The compiler should accept ASTs as input too, otherwise going back and forth AST -> source -> AST is a waste of time and error prone.
  • Should we have a DSL to create ASTs? Maybe lisp-like macros? They could be implemented as a compiler plugin too

Also, you may have noticed that the syntax-highlighter does not realize that the variables we type are on the instance side, although the code compiles right. I was experimenting with having per-class customisable parsers to solve this issue some time ago. I think this is a nice idea to explore.

What else can we do with compiler plugins? String interpolation, literal objects? Optional arguments? Take your pick.

And what about the pre-tenuring? That’s a story for another day…