Dynamic layouts with Spec2

As you may already know, Spec2 is the new version of the UI framework Spec. Spec2 is not just a new version but a complete rewrite and redesign of Spec1. Contrary to Spec1, in Spec2 all the layouts are dynamic: you can change the displayed elements on the fly. This is a radical improvement over Spec1, where most layouts were static and building dynamic widgets was cumbersome.

In this post we will show that presenters can be dynamically composed using layouts. We will start with a little interactive section, and then build a little browser with dynamic aspects. Note that in this post we simply write Spec to refer to Spec2 when we do not need to stress the difference.

Layouts as simple as objects

Building dynamic applications using Spec is simple. In fact, any layout in Spec is dynamic and composable. For example, consider the following code snippet:

"Instantiate a new presenter"
presenter := SpPresenter new.
"Optionally, define an application for the presenter"
presenter application: SpApplication new.

There are three principal layouts in Spec: SpPanedLayout, SpBoxLayout and SpGridLayout. For this presenter we will use SpPanedLayout, which can receive two presenters (or layouts) and places each of them in one half of the window.
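For comparison, here is a minimal sketch of the two other layouts (the label string and the 2 @ 1 position are purely illustrative; check the layout classes for the full API):

"A box layout stacks its children in one direction"
box := SpBoxLayout newLeftToRight.

"A grid layout places children at column @ row positions"
grid := SpGridLayout new.
grid
	add: 'Name:' at: 1 @ 1;
	add: presenter newTextInput at: 2 @ 1.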

presenter layout: SpPanedLayout newTopToBottom.
presenter openWithSpec.

Of course, we are going to see an empty window because we did not put anything in the layout.

Empty layout

Now, without closing the window, we can dynamically edit the layout of the main presenter. We will add a button presenter by executing the following lines:

presenter layout add: (button1 := presenter newButton).
button1 label: 'I am a button'.
Paned layout with one button

Now, we can add another button. There is no need to close and reopen the window: everything updates dynamically, without rebuilding the window. As we instantiated the layout with newTopToBottom, the presenters are aligned vertically.

presenter layout add: (button2 := presenter newButton).
button2 label: 'I am another button'.
Paned layout with two buttons

Now, we can set an icon on the first button:

button1 icon: (button1 iconNamed: #smallDoIt).
Paned layout

Or we can delete one of the buttons from the layout:

presenter layout remove: button2.

What we should note here is that all these changes happen simply by instantiating a layout and sending messages to it. This means that programs can easily express complex logic for the dynamic behavior of a widget.
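Such dynamic logic can also be packaged in plain methods of a presenter class. A hedged sketch (the selector toggleSecondButton is hypothetical, and we assume here that the layout answers its children):

MyPresenter >> toggleSecondButton
	"Add button2 to the running layout, or remove it if it is already there."
	(self layout children includes: button2)
		ifTrue: [ self layout remove: button2 ]
		ifFalse: [ self layout add: button2 ]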

Building a little dynamic browser

Now, with all of this knowledge, we are going to build a mini version of the System Browser. We want to have:

  • A tree that shows all the system classes.
  • A list that shows all methods in the selected class.
  • A text presenter that shows the code of a selected method, and a button.
  • Initially the code of the method will be in “Read-Only” mode. When we press the button, we switch to “Edit” mode.

Let us get started. So, first, we need to create a subclass of SpPresenter, called MyMiniBrowserPresenter.

SpPresenter subclass: #MyMiniBrowserPresenter
	instanceVariableNames: 'treeClasses button codeShower methodsFilteringList'
	classVariableNames: ''
	package: 'MiniBrowser'

Now, we need to override the initializePresenters method in which we are going to initialize the presenters and the layout of our mini browser.

First we are going to instantiate the tree presenter. We want the tree to show all the classes present in the Pharo image. We know that (almost) all classes inherit from Object, so that is going to be the only root of the tree. To get the children of a node, we send the message subclasses to the class. We also want each of the nodes (classes) to have a nice icon; we can get the icon of a class with the message systemIcon. Finally, we want to “activate” the presenter with only one click instead of two. The code is:

MyMiniBrowserPresenter >> initializePresenters

    treeClasses := self newTree.
    treeClasses
	   activateOnSingleClick;
	   roots: { Object };
	   children: [ :each | each subclasses ];
	   displayIcon: [ :each | each systemIcon ].

For the methods, we want a filtering list, that is, a list in which we can search for elements. We also want it to display only the selector of each method, and to sort the methods in ascending order.

    methodsFilteringList := self newFilteringList.
    methodsFilteringList display: [ :method | method selector ].
    methodsFilteringList listPresenter
	    sortingBlock: [ :method | method selector ] ascending.

We said that, initially, the code is going to be in “Read-Only” mode. So the label of the button will be “Edit”, to indicate that clicking it switches to edit mode. We also want the button to have a nice icon.

   button := self newButton.
   button
	  label: 'Edit';
	  icon: (self iconNamed: #smallConfiguration).

As the initial behaviour will be read-only mode, the code shower will be only a text presenter that is not editable.

   codeShower := self newText.
   codeShower beNotEditable.

And finally we want to initialize the layout of our presenter.

self initializeLayout

Here is the complete code of the method:

MyMiniBrowserPresenter >> initializePresenters

	treeClasses := self newTree.
	treeClasses
		activateOnSingleClick;
		roots: { Object };
		children: [ :each | each subclasses ];
		displayIcon: [ :each | each systemIcon ].

	methodsFilteringList := self newFilteringList.
	methodsFilteringList display: [ :method | method selector ].
	methodsFilteringList listPresenter
		sortingBlock: [ :method | method selector ] ascending.

	button := self newButton.
	button
		label: 'Edit';
		icon: (self iconNamed: #smallConfiguration).

	codeShower := self newText.
	codeShower beNotEditable.

	self initializeLayout

Placing elements visually

In the upper part of the layout, we want the classes and the methods shown horizontally, like in the System Browser (a.k.a. Calypso). So we create another left-to-right layout, with a spacing of 10 pixels, containing the classes and the methods.

Then we add that layout to our main layout, which is a top-to-bottom layout, followed by the code shower and then the button. We do not want the button to expand, and we want a spacing of 5 pixels for this layout.

MyMiniBrowserPresenter >> initializeLayout

	| classesAndMethodsLayout |
	classesAndMethodsLayout := SpBoxLayout newLeftToRight
		spacing: 10;
		add: treeClasses;
		add: methodsFilteringList;
		yourself.
	self layout: (SpBoxLayout newTopToBottom
		spacing: 5;
		add: classesAndMethodsLayout;
		add: codeShower;
		add: button expand: false;
		yourself)

So far, so good… but we did not add any behaviour to the presenters. To do that, we can either do it in the initializePresenters method or override the connectPresenters method. To clearly separate the intentions of the methods, we favor overriding connectPresenters.

Connecting the flow

When we click on a class of the tree, we want to update the items of the methods list with the methods of the selected class. When we click on a method, we want to update the text of the code shower with the source code of the method.

MyMiniBrowserPresenter >> connectPresenters

	treeClasses whenActivatedDo: [ :selection | 
		methodsFilteringList items: selection selectedItem methods ].
	methodsFilteringList listPresenter
		whenSelectedDo: [ :selectedMethod | 
			codeShower text: selectedMethod ast formattedCode ].
	button action: [ self buttonAction ]

When we click on the button, several things should happen, so it is better to create a separate method. First, we want to change the label of the button, alternating between “Edit” and “Read only”. Then, we want to change the presenter of the code shower: in read-only mode we want a text presenter that is not editable, and in edit mode we want a code presenter that highlights the code and shows line numbers. In both cases the code shower keeps the same text (the code of the selected method).

MyMiniBrowserPresenter >> buttonAction

	| newShower |
	button label = 'Edit'
		ifTrue: [ 
			button label: 'Read only'.
			newShower := self newCode ]
		ifFalse: [ 
			button label: 'Edit'.
			newShower := self newText beNotEditable ].

	newShower text: methodsFilteringList selectedItem ast formattedCode.

	self layout replace: codeShower with: newShower.
	codeShower := newShower

As a last detail, because we love details, we do not want “Untitled window” as the window title, and we also want a default extent. We override the initializeWindow: method.

MyMiniBrowserPresenter >> initializeWindow: aWindowPresenter

	aWindowPresenter
		title: 'My Mini Browser';
		initialExtent: 750 @ 650

Voilà! We have a new minimal version of the System Browser. If we run MyMiniBrowserPresenter new openWithSpec, we obtain:

Mini Browser on Read-Only mode
Mini Browser on Edit mode

With Spec we can build anything from simple applications to very sophisticated ones, and the dynamic properties are simply nice. Spec has lots of presenters that are ready to be used. Start digging into the code to see which presenters are available and what their APIs are, and start experimenting and playing! Layouts can be configured in multiple ways, so have a look at their classes and the examples available.

Sebastian Jordan-Montano

Debugging the JIT compiler Hotspot detection for ARM64

The other day we were working on the compiler detection of hotspots, originally implemented by Clément Béra during his PhD thesis. In Sista, hotspot detection is implemented as a countdown: the counter is loaded in a register and decremented, and then a jump-if-carry detects whether the subtraction underflowed.

self MoveA32: counterAddress R: counterReg.
self SubCq: 16r10000 R: counterReg. "Count executed"
countTripped := self JumpCarry: 0.

We were building some unit tests for this functionality, and we were interested at first in seeing how counters increment/decrement when methods execute. We wrote a couple dozen tests for different cases (because the code for the counters is a bit more complicated, but that’s for another day). The code of one of our tests looked like the following: we compile a method, we execute it on a machine code simulator, and then we verify that the counter was effectively incremented (because the public API is in terms of positive counts and not countdowns):

    | nativeMethod counterData jumpCounter |
    nativeMethod := self jitMethod: (self class >> #methodWithAndAndJump:).
    self
        callCogMethod: nativeMethod
        receiver: memory nilObject
        arguments: { memory trueObject }
        returnAddress: callerAddress.

    counterData := interpreter picDataFor: nativeMethod.
    jumpCounter := memory fetchPointer: 1 ofObject: counterData.
    self
        assert: (memory integerValueOf: (memory fetchPointer: 1 ofObject: jumpCounter))
        equals: 1

There was, however, something fishy about the ARM64 version: in addition to incrementing the counter, the code was taking the carry jump, which led our test to fail…

Doing some machine code debugging

So everything was working OK on Intel (IA32, X64) but not on ARM (neither 32 nor 64 bits). In both ARM versions the jump was incorrectly taken. I first checked that the instruction was being correctly assembled. Since that seemed OK, I went on digging with our machine code debugger. I found the corresponding instruction, set the instruction pointer there, and started playing with register values to see what was happening.

As you can see in the screenshot, the code is being compiled into subs x25, x25, x16, which you can read as x25 := x25 - x16. So I started playing with the values of those two registers and the carry flag, which is the flag that activates our jump carry. The first test I did was to check 2 - 1.

self carry: false.
self x25: 2.
self x16: 1.

The subtraction was correct, leaving the correct result in x25, but the carry flag was set! That was odd. So I tested a second thing: 0 - 1.

self carry: false.
self x25: 0.
self x16: 1.

In this case, the carry flag was not set, but the negative flag was. That was even more odd: the case that should set carry was not setting it, and vice versa. It seemed the carry was inverted! I did a final test just to confirm my assumption: 1 - 1 should set both the zero and carry flags if the carry flag was inverted.

self carry: false.
self x25: 1.
self x16: 1.

ARM Carry is indeed strange

I was puzzled for a moment, and then I started looking for a culprit: was our assembler doing something wrong? Was it a bug in Unicorn, our machine code simulator? Or was it something else?

After digging for some time I came to find something interesting in the ARM documentation:

For a subtraction, including the comparison instruction CMP and the negate instructions NEGS and NGCS, C is set to 0 if the subtraction produced a borrow (that is, an unsigned underflow), and to 1 otherwise.


And something similar in a stack overflow post:

ARM uses an inverted carry flag for borrow (i.e. subtraction). That’s why the carry is set whenever there is no borrow and clear whenever there is. This design decision makes building an ALU slightly simpler which is why some CPUs do it.


It seems that the carry flag in ARM is set if there is no borrow, so it is indeed inverted! But it is only inverted for subtractions!
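To make the rule concrete: after a subtraction a - b, ARM’s carry equals the unsigned comparison a >= b (no borrow), while on x86 the carry flag equals a < b (borrow). A tiny sketch of both conventions in plain Smalltalk (not VM code):

armCarryAfterSub := [ :a :b | a >= b ]. "C = 1 when there is no borrow"
x86CarryAfterSub := [ :a :b | a < b ].  "CF = 1 when there is a borrow"

armCarryAfterSub value: 2 value: 1. "true: 2 - 1 sets C on ARM"
armCarryAfterSub value: 0 value: 1. "false: 0 - 1 clears C on ARM"
x86CarryAfterSub value: 0 value: 1. "true: 0 - 1 sets CF on x86"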

Extending the Compiler with this

Since carry works differently across architectures, but only for subtractions, I created a new instruction factory method JumpSubstractionCarry: that detects carry for subtractions and whose implementation is platform specific. Then I replaced the code of the counter with the following:

self MoveA32: counterAddress R: counterReg.
self SubCq: 16r10000 R: counterReg.
countTripped := self JumpSubstractionCarry: 0.

JumpSubstractionCarry: is backend defined:

JumpSubstractionCarry: jumpTarget
    ^ backEnd genJumpSubstractionCarry: jumpTarget

The default implementation just delegates to the original factory method:

genJumpSubstractionCarry: jumpTarget
    ^cogit JumpCarry: jumpTarget

and the ARM backends (both 64 and 32 bits) use a jump-if-no-carry instead:

genJumpSubstractionCarry: jumpTarget
    ^cogit JumpNoCarry: jumpTarget

With this, our tests went green on all platforms 🙂

If you want to see the entire related code, you can check the following WIP: https://github.com/pharo-project/opensmalltalk-vm/compare/headless…guillep:sista?expand=1

Advanced stepping and custom debugging actions in the new Pharo debugger

In this article, we describe the new advanced stepping menu available in the debugger. These advanced steps provide convenient and automated debugging actions to help you debug your programs. We will see how to build and add new advanced steps, and how to extend the debugger toolbar with your own customized debugging menus.

Advanced steps

Have you noticed the bomb in the debugger toolbar? These are the advanced steps! These steps provide you with useful and convenient debugging actions: basically, they automatically step the execution until a given condition is satisfied. When that happens, the execution is interrupted and its current state is shown in the debugger. In the following, we describe these advanced steps, and then implement and integrate a new advanced step that skips the current expression.

Advanced steps menu in the debugger tool bar

What do these advanced steps do?

These advanced steps are a bit experimental (notice the bomb!). This means they can sometimes be a bit buggy, and in that case you should open an issue to report the bug. Most of the time, however, they do the job. Some of them have a failsafe that stops the stepping to give feedback to developers, to avoid infinite stepping when the expected condition can never be met. For now, that failsafe limits the automatic stepping to 1000 steps before notifying the developer and asking whether to continue. The current advanced steps and what they do are described below.
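Conceptually, each advanced step is a loop of elementary steps guarded by a condition, protected by the failsafe. A hedged sketch (the selectors below are illustrative, not the actual NewTools implementation):

stepUntil: aConditionBlock
	"Step the execution until aConditionBlock is satisfied,
	asking the developer every 1000 steps whether to go on."
	| count |
	count := 0.
	[ aConditionBlock value ] whileFalse: [
		self step.
		(count := count + 1) >= 1000 ifTrue: [
			(self confirm: 'Already stepped 1000 times. Continue?')
				ifFalse: [ ^ self ].
			count := 0 ] ]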

  • Steps until any instance of the same class as the current receiver receives a message. For instance, if the receiver is a point executing the extent message, this command steps until another point receives a message.
  • Steps until the current receiver receives a message. For example, the next time a visitor is called back from a visited object.
  • Steps until the execution enters a new method. Stops just after entering that new method.
  • Steps the execution until the next object creation, that is, the next class instantiation.
  • Steps the execution until the current method is about to return. Stops just before returning.

Building a new advanced step: skipping expressions

In the following, we build a new advanced step to demonstrate how you can easily add new debugging commands to the advanced step menu.

Building the command class

First, we must create our command class as a subclass of SindarinCommand. The SindarinCommand class provides small facilities for building debugger commands, such as accessors to the debugger API and to the debugger UI.

SindarinCommand subclass: #SindarinSkipCommand
    instanceVariableNames: ''
    classVariableNames: ''
    package: 'NewTools-Sindarin-Commands'

Second, we must write three class-side methods to configure the command: an icon name, a description and a name. The defaultName method also contains the pragma <codeExtensionDebugCommand: 50>: this pragma is how the debugger automatically finds the command to display in the advanced steps menu. The parameter of the pragma is the order of appearance of the menu action (we will not bother with it in this tutorial).

SindarinSkipCommand class>>defaultIconName
	^ #smallDoIt "any icon name available in the current theme works; this one is just an example"
SindarinSkipCommand class>>defaultDescription
	^ 'Skips the current expression'
SindarinSkipCommand class>>defaultName
	<codeExtensionDebugCommand: 50>
	^ 'Skip'

If we open a new debugger, we see now that a new advanced step is available: the skip debugging action.

The new “Skip” advanced step automatically appeared in the menu.

Building the skip action

Now that we have our menu button, we need to define what it does! We must write the execute method in SindarinSkipCommand. This method is called every time you click on the advanced step button.

Ideally, commands should not contain the logic of the debugging code, because that requires accessing and modifying elements of the debugger UI (or debugger presenter) and accessing and controlling the debugging model. This is not always possible (not everything is accessible from outside the debugger), and it also leads to complex and hard-to-test code in those execute methods.

That is why we provide access to the debugger UI through the debuggerPresenter accessor, and only call its API in those execute methods. In our implementation below, we call the skipCurrentExpression API that implements the skipping behavior. We do not show this implementation here, as our focus is adding new advanced steps. We prefer to create skipCurrentExpression as an extension method of the debugger presenter, located in the same package as our skip command class.

SindarinSkipCommand >> execute
	self debuggerPresenter skipCurrentExpression

Experimenting with our new skip action

We see a demonstration of this new advanced step in the video below. Note that not everything is possible: at the end, the debugger refuses to skip the return of the method.

Additionally, skipping code is a sensitive operation. It can lead to an inconsistent program state, and you must use it with caution. Remember: there is a bomb in the menu 🙂

How to build your own debugger action by extending the debugger action bar

Extending the toolbar of the debugger with your own menu and commands is fairly easy. You can do it in a few steps, which we describe below.

First, we need to create an extension method of the debugger that will be automatically called by the Spec command-building mechanics. This method takes two parameters: stDebuggerInstance, the debugger instance requesting to build commands, and rootCommandGroup, the default command tree built by that debugger instance. The first element of this extension method is the <extensionCommands> pragma. Spec uses this pragma to find all methods extending the command tree of a given presenter (here the debugger) to automatically build extensions.

This method starts like this:

StDebugger>>buildMyExtentionMenuWith: stDebuggerInstance forRoot: rootCommandGroup
	<extensionCommands>

Now, let us assume that you built a set of commands, which we refer to as yourCommandClasses in the following. We instantiate all your commands and store them in a commands temporary variable. Each time, we pass the debugger instance to the instantiated command. All these commands can then obtain a reference to the debugger by executing self context, which returns the debugger, and use its API.

commands := yourCommandClasses collect: [:class | class forSpecContext: stDebuggerInstance ].

The next step is to obtain the toolbar command tree from the debugger command tree. This tree contains all the default commands of the debugger, which we want to extend:
toolbarGroup := rootCommandGroup / StDebuggerToolbarCommandTreeBuilder groupName.

Then, we build our own command group and add it to the toolbar. The following code configures the new group as a menu button that opens with a popover (as for the advanced steps described above):

yourToolbarGroup := CmCommandGroup forSpec
	name: 'Advanced Step';
	icon: (stDebuggerInstance application iconNamed: #smallExpert);
	yourself.

toolbarGroup register: yourToolbarGroup.

Finally, we register our commands to our new command group, which makes them available in the debugger toolbar:
commands do: [ :c | yourToolbarGroup register: c ].

The full method looks like this:

StDebugger>>buildMyExtentionMenuWith: stDebuggerInstance forRoot: rootCommandGroup
	<extensionCommands>
	| commands toolbarGroup yourToolbarGroup |
	commands := yourCommandClasses collect: [ :class | class forSpecContext: stDebuggerInstance ].
	toolbarGroup := rootCommandGroup / StDebuggerToolbarCommandTreeBuilder groupName.
	yourToolbarGroup := CmCommandGroup forSpec
		name: 'Advanced Step';
		icon: (stDebuggerInstance application iconNamed: #smallExpert);
		yourself.
	toolbarGroup register: yourToolbarGroup.
	commands do: [ :c | yourToolbarGroup register: c ]


We have seen the advanced steps, what they do, and how we can build and add new advanced steps. We have then seen how to extend the debugger toolbar with our own customized debugger actions.

Now, you have more power over your debugger, and you can use it to build awesome debugging tools suited to your own problems!

Installing Pharo in Linux using the System Package Manager

One of the improvements that we are including in Pharo 9 is the update of the build process in OpenBuildService.

This service allows us to produce packages for different Linux distributions. These packages are built using the versions loaded in the distribution, and they can be installed and updated using the tools present in the system.

Currently we have support for the following set of distributions and architectures, but more are coming.

  • Debian: 9.0 / 10.0 (X86_64)
  • Ubuntu: 18.04 / 20.04 / 20.10 (X86_64, ARM v8 (64 bits))
  • Raspbian: 9.0 / 10.0 (ARM v7 (32 bits))
  • Arch (X86_64)

The packages are currently tagged as latest; they will be promoted to stable as soon as we release Pharo 9.

Updated instructions for installing are found here:


A Taste of Ephemerons

For a couple of years now, Pharo has included support for Ephemerons, originally introduced with the Spur memory manager written by Eliot Miranda. For the upcoming Pharo 9.0 release, we have stress-tested the implementation (with several hundred thousand Ephemerons), made it compatible with the latest changes in the old space compaction algorithm, and made it a tiny bit more robust. In other words, from Pharo 9 on, Ephemerons will be part of the Pharo family for real, and we will work on Pharo 10 to have nice standard library support for them. For now, the improvements are available only in the latest nightly build of the VM, waiting to be promoted as stable.

Still, you may be scratching your head at “what the **** are ephemerons?”. The rest of this post will give a taste of them.

What are Ephemerons?

An ephemeron is a data structure that gives a notification when an object is garbage collected. It was invented by Barry Hayes and published at OOPSLA in 1997 in a paper named “Ephemerons: A New Finalization Mechanism”. This mechanism is particularly useful when working, for example, with external resources such as files or sockets.

To be concrete, imagine you open a file, which yields an object holding a reference to a system file descriptor. You read and write from it, and when you’re done, you close it. Closing the file closes the file descriptor and returns the resource to the OS. You really want your file to be closed; otherwise nasty stuff may happen, because your OS limits the number of files you can open.

Sometimes, however, applications do not have such a straightforward control flow. Let’s imagine the following, not necessarily realistic, arguably not well designed, but very illustrative case: you open a file, you pass it as an argument to some library, and… now the library owns a reference to your file. So maybe you don’t want to close it yet. And the library may not want to close it either, because you are the real owner of the file!

Another possibility is to let the file be. And make sure that when the object is not used anymore and garbage collected, we close its file descriptor. An Ephemeron does exactly that! It allows us to know when an object is collected, and gives us the possibility to “finalize” it.

You can test it doing (using the latest VM!):

Object subclass: #MyAnnouncingFinalizer
    instanceVariableNames: ''
    classVariableNames: ''
    package: 'MyEphemeronTest'

MyAnnouncingFinalizer >> finalize
    self inform: 'gone!'

obj := MyAnnouncingFinalizer new.

e := Ephemeron new.
e key: obj.

obj := nil.
Smalltalk garbageCollect. "force a collection so the ephemeron can fire"

You will see that after nilling the variable obj, at the next garbage collection the Ephemeron reacts and sends the finalize message to our MyAnnouncingFinalizer object.

What about weak objects?

Historically, Pharo also supports weak objects, and another finalization mechanism based on them.
A weak object is a special kind of object whose references are weak.
Informally, a weak reference is an object reference that is not taken seriously by the garbage collector: if the garbage collector finds that an object is only referenced by weak references, it collects it and replaces all those weak references with a “tombstone” (which tends to be nil in many implementations).

Historically, we have used the weak mechanism for finalization in Pharo, which can be used like this:

obj := MyAnnouncingFinalizer new.
weakArray := WeakArray new: 1.
weakArray at: 1 put: obj.
WeakRegistry default add: obj.

Here, the weak array holds a weak reference to our object, and the obj reference in the playground is a strong reference. As soon as we nil the playground reference, the object will be detected for finalization and will execute the finalize method too. Moreover, if we check our weak array, we will see the tombstone in place of the original object.

obj := nil.

weakArray at: 1.

Why not use this weak finalization instead of the ephemeron one?
The main reason is performance. With the weak finalization process, every time the VM detects that an object needs to be finalized, it raises an event. Then, the weak finalization library iterates over all elements in the registry, checking which elements need to be finalized by looking for tombstones. This means that for each finalized weak object, the weak finalization must do a full scan of all registered weaklings!

The ephemeron mechanism is more direct: when the VM detects that an ephemeron needs to be finalized, it pushes the ephemeron to a queue and raises an event. Then, the ephemeron finalization simply empties the queue and finalizes the queued ephemerons. There is no need to check all existing ephemerons.
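Schematically, the two approaches compare like this (an illustrative sketch with made-up names such as registry and referentIsGone, not the actual VM or library code):

"Weak finalization: every event triggers a scan of all registered weaklings"
registry do: [ :entry |
	entry referentIsGone ifTrue: [ entry executor finalize ] ].

"Ephemeron finalization: visit only the ephemerons queued by the VM"
[ ephemeronQueue isEmpty ] whileFalse: [
	ephemeronQueue removeFirst finalize ]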

A Weak Pharo Story, Memory Leaks and More

Of course, ephemerons are not only about efficiency; they also help avoid many nasty memory leaks. A couple of years ago, we gave a presentation with Pavel at ESUG about a very concrete memory leak caused by misuse of weak objects. It’s a fun story to tell with enough perspective, but it was not a fun bug to track down at the time 😛 .

And even more: a robust ephemeron implementation will help us remove all the (potentially buggy and inefficient) weak finalization code in Pharo 10!

Debugger Extensions in Pharo 9.0

Did you ever want to see the value of an expression each time you navigate the debugger stack? That is what we will show in this tutorial: you’ll learn how to implement a new extension for the StDebugger in Pharo 9.0, the ‘Expression Evaluator StDebugger Extension’ (or simply EvaluatorDebugger for short), which allows the evaluation and inspection of an arbitrary expression in any of the available contexts.

Navigation links

I. Introduction: Outline of the debugging events and debugger extensions.
II. Tutorial Part I: Creating a basic empty debugger extension.
III. Tutorial Part II: Implementing an Expression Evaluator Debugger Extension.
The finished code can be found in its repository

I. Introduction

Debugging in Pharo 9.0

Whenever you debug something in Pharo, this is what happens.

  1. Pharo chooses an appropriate debugger.
    It’ll be the StDebugger in most scenarios.
  2. Once a debugger is chosen by the runtime, several things happen:
    • The UI object for the chosen debugger is instantiated (StDebugger).
    • The Debug Process is started and a DebugSession object is instantiated.
    • An action model object (StDebuggerActionModel) is instantiated.
    • The associated extensions for the debugger are loaded.

The general idea is that the debugger (UI) interacts with the action model. The action model is an interface to the debug session, which owns and works over a debug process.

To understand a little bit better, here is a little explanation of each one of the actors (objects) relevant for creating a debugger extension.

I.1 StDebugger (The debugger UI)

The StDebugger class inherits from SpPresenter and acts as the main UI for the debugger. It owns the following objects:

  • DebugSession.
  • StDebuggerActionModel

The UI object is usually designed to allow the usage of the functionalities exposed by the ActionModel object.

I.2 DebugSession

The DebugSession models a debugging session by holding its state and providing a basic API to perform elemental debugging operations. It allows one, among other actions, to:

  • Step the execution.
  • Manipulate contexts.

It owns the Debug Process.

I.3 Debug Process

It’s the process that the DebugSession works upon: it runs the debugged execution. It’s owned by the DebugSession object.

I.4 ActionModel

Your debugger extension logic should not be implemented directly in the presenter (the UI). To separate responsibilities, we code an ActionModel object, which will implement the complex execution behavior based on the Debug Session.

StDebuggerActionModel exposes all the functions that are available for the StDebugger.

I.5 StDebugger Extensions

When the StDebugger is initialized, it loads all its extensions. A debugger extension is minimally composed by the following:

  1. A presenter object (UI).
The UI, a subclass of SpPresenter, that allows the user to make use of the extension capabilities. Note that having just a presenter is not enough: for this object to be recognized and listed as a debugger extension, the class must use the trait TStDebuggerExtension.
  2. An ActionModel object.
    This is the object that implements and exposes all its special debugging functionalities. It’s normally owned by the extension presenter.

In this tutorial, you will create a Presenter and an ActionModel for your debugger extension.
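To make the ownership chain described above concrete, here is a playground-style sketch. The selector names (#session, #interruptedContext, #interruptedProcess, #stepInto:) are assumptions based on the description; check the actual classes in your image.

```smalltalk
"Sketch: navigating from the debugger UI down to the debug process.
 Selector names are assumptions; verify them in your image."
session := aStDebugger session.          "the DebugSession owned by the UI"
context := session interruptedContext.   "the currently suspended context"
process := session interruptedProcess.   "the debug process the session owns"
session stepInto: context                "one of the elemental stepping operations"
```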

II. Tutorial – Part I

In “Tutorial Part I”, you will develop a minimal (blank: no tests, no accessors, no debugging features yet) implementation of a debugger extension that is integrated with the Debugger Extensions system in Pharo 9.

A blank debugger extension is composed of a UI and an ActionModel object.

Adding a new debugger extension for the StDebugger

The first step towards a fully featured new extension is to create an empty one that is integrated with the debugger extensions system of Pharo 9 and is shown among the other extensions.

You will learn how to implement the following:

  • The extension UI (SpPresenter Subclass, with TStDebuggerExtension trait).
  • The Extension Action Model (Your actual Debugger Extension object that exposes its functionalities).
The finished debugger extension

Implementing the basics for debugger extensions

The Action Model

In your package of choice (here called ‘EvaluatorDebugger-Base’), define a class for your debugger extension model. Name it “EvaluatorDebugger”. There are no constraints on the design, but it is a good idea to hold a reference to the StDebugger, the DebugSession, or whatever your extension might need.

Note: You are not developing a Debugger, but a Debugger Extension. Nonetheless, we call it EvaluatorDebugger for simplicity.

Object subclass: #EvaluatorDebugger
   instanceVariableNames: 'stDebugger'
   classVariableNames: ''
   package: 'EvaluatorDebugger-Base'

Also add the accessors.

EvaluatorDebugger >> stDebugger
   ^ stDebugger

EvaluatorDebugger >> stDebugger: anObject
   stDebugger := anObject

The Extension UI

In your package of choice, create a subclass of SpPresenter that uses the trait TStDebuggerExtension. Name it “EvaluatorDebuggerPresenter”. Also, add an instance variable to hold your Action Model:

SpPresenter subclass: #EvaluatorDebuggerPresenter
   uses: TStDebuggerExtension
   instanceVariableNames: 'evaluatorDebugger'
   classVariableNames: ''
   package: 'EvaluatorDebugger-Base'

Remember to implement the trait methods; in particular, put a name to your extension.

EvaluatorDebuggerPresenter >> debuggerExtensionToolName
   ^ 'Evaluator Debugger' 

The Extensions system relies on certain methods to be implemented in your UI object to have a functional extension. Implement the following:

EvaluatorDebuggerPresenter >> setModelBeforeInitialization: aStDebugger
   "This method is called when the StDebugger initializes its extensions.
   We initialize our model (the debugger extension) with a reference to the stDebugger."
   evaluatorDebugger := EvaluatorDebugger new.
   evaluatorDebugger stDebugger: aStDebugger

EvaluatorDebuggerPresenter >> initializePresenters
   "Called automatically by the Spec framework. This method describes how the widgets are initialized"
   "There are no widgets for the moment."

EvaluatorDebuggerPresenter >> updatePresenter
   "Called automatically when the debugger updates its state after stepping"
   "Your widgets should be updated here."
   super updatePresenter

On the class side, your presenter needs the following method:

EvaluatorDebuggerPresenter class >> defaultSpec
   "An empty vertical box layout, for the moment"
   ^ SpBoxLayout newVertical

So far, you have an empty debugger extension. It doesn’t do anything yet.

Next, you’ll make it appear among the other extensions.

Activate the debugger extensions in Pharo

By default, no debugger extensions are shown.
How do you see your new extension?

So far, you have a functional empty debugger extension. For it to be visible and available in the StDebugger, you need to enable the Debugger Extensions in the Pharo Settings. This is how:

  1. Go to the Pharo Settings.
  2. Navigate to Tools > Debugging > Debugger Extensions and check the option Activate Extensions…
  3. Expand Activate Extensions…, find your extension (Evaluator Debugger) and check the option Show in Debugger.

Additionally, when developing debugger extensions, it is recommended to enable the option Handle Debugger Errors, as in the last picture.

By default, if your debugger extension throws an error, it will be ignored and the StDebugger will not load the extension. This means that you can’t debug your extension code directly in case of failure. By enabling Handle Debugger Errors, whenever an error is thrown in your extension, a new StDebugger instance (without extensions) will be launched so you can debug it.

For this, navigate and check the option: Tools > Debugging > Handle Debugger Errors.

From now on, whenever you debug something, your extension should appear in the top-right pane.

Your empty debugger extension, shown in the StDebugger

III. Tutorial – Part II

Implementing the Expression Evaluator StDebugger Extension

Remember: for readability, in this tutorial the extension is simply called “EvaluatorDebugger” instead of its full name: “Expression Evaluator StDebugger Extension”.

Now you’ll add your extension functionalities. For this, you will:

  1. Implement the logic of your debugger extension (Implement the ActionModel – EvaluatorDebugger – methods).
  2. Implement an object subclass of SpCodeScriptingInteractionModel – EvaluatorDebuggerCodeInteractionModel – needed for the expression-evaluation-in-context logic.
  3. Finish the UI (EvaluatorDebuggerPresenter) layout and widgets.

During Tutorial – Part I, we developed an Action Model without any behavior – the EvaluatorDebugger. This time, we will complete the class with the intended logic by adding a method that allows the evaluation of an expression in a given context.

Add a new method: #evaluateInCurrentContextExpression:withRequestor:

EvaluatorDebugger >> evaluateInCurrentContextExpression: aStream withRequestor: anObject
   "Evaluates the expression coming from a stream. Uses the current context of the StDebugger"
   | context |
   context := stDebugger currentContext.
   ^ context receiver class compiler
        source: aStream;
        context: context;
        receiver: context receiver;
        requestor: anObject;
        failBlock: [ nil ];
        evaluate

Your extension UI will feature an SpCodePresenter, where the user can type an expression that is evaluated in the selected context of the StDebugger.

Your code presenter should take the currently selected context into account to work correctly (syntax highlighting, inspection, etc.). For this, you need to implement a subclass of SpCodeScriptingInteractionModel as follows.

SpCodeScriptingInteractionModel subclass: #EvaluatorDebuggerCodeInteractionModel
   instanceVariableNames: 'context'
   classVariableNames: ''
   package: 'EvaluatorDebugger-Base'

EvaluatorDebuggerCodeInteractionModel >> bindingOf: aString
   ^ (context lookupVar: aString) ifNotNil: [ :var | 
        var asDoItVariableFrom: context ]

EvaluatorDebuggerCodeInteractionModel >> context
   ^ context

EvaluatorDebuggerCodeInteractionModel >> context: anObject
   context := anObject

EvaluatorDebuggerCodeInteractionModel >> hasBindingOf: aString
   ^ (context lookupVar: aString) notNil
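You can sanity-check the interaction model from a playground, using thisContext as a stand-in for a real debugger context. This is a sketch; which names resolve depends on what is visible in that context.

```smalltalk
"Point the interaction model at some context and probe its bindings.
 thisContext stands in for the context selected in the StDebugger."
| model |
model := EvaluatorDebuggerCodeInteractionModel new.
model context: thisContext.
model hasBindingOf: 'model'.   "a temporary visible in this context"
model hasBindingOf: 'zork'     "an unknown name answers false"
```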

Finally, define the layout and behavior of the extension UI.

UI Layout

The object is a Spec-based presenter. Design a neat and practical interface!

Modify the EvaluatorDebuggerPresenter class. Add the instance variables for the widgets.

SpPresenter subclass: #EvaluatorDebuggerPresenter
   uses: TStDebuggerExtension
   instanceVariableNames: 'toolbar code inspector valueLabel evaluatorDebugger'
   classVariableNames: ''
   package: 'EvaluatorDebugger-Base'

Define the layout on the class side as follows.

EvaluatorDebuggerPresenter class >> defaultSpec
   ^ SpBoxLayout newVertical
        add: #toolbar expand: false fill: false padding: 0;
        add: #code;
        add: 'Expression Value' expand: false fill: false padding: 5;
        add: #valueLabel expand: false fill: false padding: 5;
        add: #inspector;
        yourself

Implement the following instance-side methods.

EvaluatorDebuggerPresenter >> initializeCode
   "We define the extension's code presenter initialization here"
   code := self newCode.
   code interactionModel: EvaluatorDebuggerCodeInteractionModel new.
   code syntaxHighlight: true.
   code text: '"put your expression here"'

EvaluatorDebuggerPresenter >> initializePresenters
   "Called by the Spec framework. This method describes how the widgets are initialized"
   self initializeToolbar.
   self initializeCode.
   valueLabel := self newLabel.
   valueLabel label: 'Write an expression first'.
   inspector := nil inspectionRaw.
   inspector owner: self.

   "when changing the selected context in the stDebugger stackTable, re-evaluate the expression in that context"
   evaluatorDebugger stDebugger stackTable selection whenChangedDo: [ 
      self updatePresenter ].
   self updatePresenter

EvaluatorDebuggerPresenter >> initializeToolbar
   toolbar := self newToolbar
                 addItem: (self newToolbarButton
                        icon: (self application iconNamed: #smallDoIt);
                        action: [ self updatePresenter ];
                        yourself);
                 yourself

‘updatePresenter’ is meant to be called automatically when the debugger updates its state after stepping or after changing the selected context in the stack. However, the current version of Pharo 9.0 – at the date of writing, 2021/02/16 – does not perform the update after changing the selected context. To work around this, we used the following “hacky” code:

evaluatorDebugger stDebugger stackTable selection whenChangedDo: [
self updatePresenter ]

This adds an updatePresenter call to the stack table’s selection-change callbacks, as done in the method initializePresenters above.


The user will write expressions and “press buttons/click things” in the debugger and in your extension, expecting something to happen. Also, the StDebugger will issue an updatePresenter call to all the extensions. You need to implement that behavior.

Add an accessor to directly expose the current context.

EvaluatorDebuggerPresenter >> currentStDebuggerContext
   "A 'shortcut' to get the same currentContext of the StDebugger"
   ^ evaluatorDebugger stDebugger currentContext

Remember that whenever the StDebugger updates its state, it will automatically call updatePresenter on each of the extensions. We want the code presenter, and also the displayed expression value, to reflect the new state.

EvaluatorDebuggerPresenter >> updatePresenter
   "Called automatically when the debugger updates its state after stepping"
   self updateCode.
   self updateExpressionValueDisplayed.
   super updatePresenter

EvaluatorDebuggerPresenter >> updateCode
   "Sets the context of our debugger-extension code presenter to be the same one of the StDebugger"
   code interactionModel context: self currentStDebuggerContext

EvaluatorDebuggerPresenter >> updateExpressionValueDisplayed
   "Evaluate the expression in the code presenter, using the appropriate context (the current one of the stDebugger). Then update the UI to show and inspect the obtained value, or a potential exception."
   | expressionBlock expressionResult errorFlag errorMessage |
   expressionBlock := [ 
                         evaluatorDebugger
                            evaluateInCurrentContextExpression: code text readStream
                            withRequestor: code interactionModel ].
   errorFlag := false.
   expressionResult := expressionBlock
                          on: Exception
                          do: [ :e | 
                             errorFlag := true.
                             errorMessage := e description.
                             e ].
   "The inspector shows the result object in case of success, or the Exception otherwise"
   inspector model: expressionResult.
   valueLabel label: (errorFlag
          ifTrue: [ errorMessage ]
          ifFalse: [ expressionResult asString ])

Try it!

Now you have a fully functional debugger-extension. Try debugging some code!

Example code to be debugged:


myCollection := OrderedCollection new.
myCollection add: 1.
myCollection add: 2.
myCollection add: 3.
myCollection add: 4.

Object assert: myCollection size == 3

  1. Debug some code (cmd+D in the Playground).
  2. Write an expression in your extension’s code presenter (try the code in the image below, if following the example).
  3. Select different Contexts in the stDebugger and see what happens!


The Pharo 9.0 debugger extensions system makes it convenient to add new extensions. The example developed in this tutorial explores all the basic aspects needed to have a functional extension completely integrated with the environment. Should you need to create a new one, or modify an existing one, now you have the knowledge.

Running Pharo 9 in Docker

Docker is an excellent tool to manage containers and execute applications in them. This is not a discovery!! The idea of this post is to show how easy it is to have a Pharo 9 Seaside application running in Docker.

Initial Considerations

This post is based on the excellent Docker images written by the Buenos Aires Smalltalk group (https://github.com/ba-st). They maintain a repository with configurations for different Docker images, from Pharo 6.1 to Pharo 9.

You can check the repository in https://github.com/ba-st/docker-pharo

These images are also available on Docker Hub (https://hub.docker.com/r/basmalltalk/pharo), so you can also download ready-made images from there, as we are going to do in this small example.

Also, for this example we are using an existing Seaside application: TinyBlog. It is an excellent tutorial to learn Seaside, Voyage, and Pharo in general. It is available here.

We are using the latest version of the project, which is hosted at https://github.com/LucFabresse/TinyBlog

Creating a Docker Image for our Application

In order to start a container with our application, we need to create an image with all the requirements installed and built. Once we have it, it is possible to start one or more instances of this application.

To do so, we are going to start from the basmalltalk/pharo:9.0-image image.

First, we pull this image from Docker Hub so it is available locally:

docker pull basmalltalk/pharo:9.0-image

Once we have the base image, we need to write a Dockerfile with the recipe to build our application image. The downloaded image already comes with a Pharo 9 VM and image. We need to perform the following steps on top of it:

  • Install our application with all the dependencies using Metacello
  • Generate the initial test data of the application
  • Define an entry point that will execute the Zinc server
  • Expose the Zinc server port so it can be used outside the container

For doing so, we are going to create a file called Dockerfile with the following content:

FROM basmalltalk/pharo:9.0-image
RUN ./pharo Pharo.image eval --save "Metacello new \
    baseline: 'TinyBlog'; \
    repository: 'github://LucFabresse/TinyBlog/src'; \
    onConflict: [ :ex | ex useLoaded ]; \
    load"
RUN ./pharo Pharo.image eval --save "TBBlog reset ; createDemoPosts"
EXPOSE 8080/tcp
CMD ./pharo Pharo.image eval --no-quit "ZnZincServerAdaptor startOn: 8080"

Once we have the Dockerfile (you can store it wherever you like), it is time to build an image from it. From the directory containing the Dockerfile, execute:

docker build -t pharo/tinyblog:latest .

This will create a Docker Image using the description of the Dockerfile in the current directory, and the new image will be called pharo/tinyblog with a tag marking it as latest.

Once the process is finished, if we list the images with

docker images

we get:

pharo/tinyblog latest fee45c26e604 56 minutes ago 727MB

Executing Our Application

Once we have an image of our application, it is possible to run it as one or more containers. We are going to start a container from the image, redirecting port 8080 to the outside so we can access it.

For doing so, we execute:

docker run -d -P pharo/tinyblog

This will execute our image pharo/tinyblog in detached mode (-d), so it runs in the background, and publish all exposed ports to the outside (-P). The command will return the ID of the container.

This is a really simple way of running an application; as this is not a Docker tutorial, we only show a minimal example.

If we check the running containers with:

docker ps

we can see information about the running containers:

CONTAINER ID   IMAGE            COMMAND                  CREATED          STATUS          PORTS                     NAMES
3191540dbeb3   pharo/tinyblog   "/bin/sh -c './pharo…"   44 minutes ago   Up 44 minutes>8080/tcp   fervent_goldstine

We can see that our application is running and that the redirected port is 32768. We can also see some information about the container, its ID, and an auto-generated name; we can use either of them to refer to it in any docker command like stop, rm, etc.

If we open the URL http://localhost:32768/TinyBlog in our browser, we can see the running application.

Once we have our container, we can do anything else we can do with containers: stopping it, resuming it, using it in collaboration with other containers or in a multi-container infrastructure. But that… is another story.


This small post just shows how the different tools and technologies of Pharo can easily be integrated with state-of-the-art solutions. If it is of interest to the community, this post can be the start of a nice infrastructure series.

First Apple M1 Pharo Version

This post is now outdated. The current version is available via Pharo Zero-Conf and, in the future, in the Pharo Launcher.
Check https://get.pharo.org/

After receiving the new Apple Mac Mini with the M1 processor, we have produced the first version of the Pharo VM for it. This is a base version that lacks JIT optimizations and requires external libraries (it is not built as a bundle). However, it is a good step forward to have a working version on this new combination of architecture and OS. Also, this VM, even without JIT, has better performance than the JIT VM running under Rosetta 2.

We are soon going to start the final stretch in the development of the new version including the JIT, as most of it reuses the work already done for Linux ARM64 and Windows ARM64. The required changes are linked to changes made by Apple in the operating system API, and to some “security improvements” of the new OS.


This first version requires some libraries installed with Brew (https://brew.sh/). These requirements will be removed in future versions.

The packages to install are:

  • cairo
  • freetype
  • sdl2
  • libgit2

Linking LibGit2

Pharo 9 expects LibGit2 1.0.1 or 0.25, but Brew ships version 1.1.0. To fix this problem, we can link version 1.1.0 as 1.0.1. This is a temporary hack, as the correct version will be shipped in a future release of the VM.

For doing so, we need to execute:

cd /opt/homebrew/lib
ln -s libgit2.1.1.0.dylib libgit2.1.0.0.dylib

Downloading the VM

The VM is available in Pharo file server at: http://files.pharo.org/vm/pharo-spur64-headless/Darwin-arm64/PharoVM-9.0.0-ef1fc42b8-Darwin-arm64-bin.tar.gz

You can download and execute it. Watch out, this VM is for Pharo 9 images.

To correctly find the libraries provided by Brew, we need to execute the VM from the terminal with (it is a single command):

LD_LIBRARY_PATH=/opt/homebrew/lib ./Pharo.app/Contents/MacOS/Pharo

In case the VM does not open because it has been put in quarantine (as it is not signed), you can allow its execution by running:

xattr -d com.apple.quarantine Pharo.app


In the following weeks, we are going to provide a complete version of the Pharo VM, integrated into the system and running like the one for Intel x64.

Debugging CMake project on Windows ARM64 with Visual Studio

If you have a Windows ARM64 machine such as the Surface Pro X, chances are you may want to debug native ARM64 applications with it. However, as of today 2/12/2020, Windows does not support local debugging of ARM64 applications, but only remote debugging. Moreover, CMake projects cannot be configured to use remote debugging, or I did not find it after hours of searching and googling :).

This page covers how to debug the CMake project of the VM on ARM64 using the Windows remote debugger and Visual Studio. The remote debugger can be used from a separate machine, or from the same machine too, mostly giving the impression of a local debugger. Yet, there are some glitches and remaining problems.

Installing the Windows Remote Debugger on the ARM64 Machine

The first thing to do is to install the Windows Remote Debugger application on the target machine, the machine we want to debug on.
The instructions are in here.

Basically, just install the remote tools package, and open it a first time to set up the network configuration.
Make sure you go to the options and check the connection port, or set it to the port of your preference.

Getting rid of CMake-VS integration (?)

Visual Studio CMake integration is nice, though it does not support our debugging use case.
Visual Studio CMake integration so far lacks proper support for ARM64 configurations, and most of the debugging options and features one can set from a normal project.
So, instead of opening a CMake-tailored Visual Studio project, we are going to create a normal Visual Studio solution from the command line, and then open it as a normal solution.

To create it manually, run the following, specifying your own configuration arguments.
Notice that this post was only tested with Visual Studio 2019.

$ cmake -B ARM64Solution -S ../repo -G "Visual Studio 16 2019" -A ARM64

Notice that the solution we created will not contain the Slang-generated source files of the VM. If you want to generate them, you may run the following from the command line, which we support for now only on x86_64 machines.

$ cmake --build ARM64Solution --target generate-sources

Otherwise, copy them from a previous generation if you already have them, as I do, and use the following command to create your project instead (you may want to look at the full set of options in here):

$ cmake -B ARM64Solution -S ../repo -G "Visual Studio 16 2019" -A ARM64 -DGENERATE_SOURCES=FALSE

Now you will see CMake has created a lot of .sln and .vcxproj files.
Open the solution using Visual Studio: Done! You’re almost there!

Configuring the Project for debugging

The basic information to debug the VM using this setup is the one described here: how to remote debug C++ apps. Basically, this can be summarized in two steps: 1) configure debugging to use remote debugging on localhost:ourPort and 2) set up deployment of binaries.

Step 1, configure debugging to use remote, can be easily done as specified in the link above: right click on the project, debugging, turn on remote debugging, configure the fields as in the link.

Step 2, set up deployment of binaries, is required because otherwise the debugging runtime seems not to be available by default on the machine. Deployment takes care of deploying the Windows debugging runtime too.

Finally, an extra problem I’ve found was that CMake creates some extra targets/projects ALL_BUILD and ZERO_CHECK that cannot be properly deployed. I removed them from the solution and everything worked like a charm.

Now clicking on the run/debug button will compile, “remotely” deploy, launch, and connect to the VM, and you’ll be debugging it natively in ARM64!

To finish, some caveats

For starters, all this dance between CMake and Visual Studio makes it difficult to find proper information online. What is clear is that CMake has far more features than what Visual Studio supports from it: for example, we cannot build our CMake project from Visual Studio on ARM64 yet without doing some manual mangling as the one in this post.

Also, manually removing the ALL_BUILD and ZERO_CHECK projects to debug does not seem the best solution; I’d like to have something more straightforward that works by default.

Let’s hope that VS CMake integration and support for ARM64 local debugging comes soon.

A VM bug?… No, an image one

Today, some Pharo users asked why we had lost a nice feature on Mac OS X. In this operating system, if a window represents an open document, Cmd-clicking on its title shows a nice menu with the full path to the document. The user can also choose to open Finder (the file explorer) on any of these directories.

Menu that appears with Cmd+Click on the title of a window

This feature was not available anymore in the latest Pharo 9 VM. What happened? Does the VM have a regression? Do we need to throw everything away and use another programming language :)? Let’s see that this is not the case. It is also a nice story about why we want in-image development at the center of our lives.

Where is the Window handled?

One of the main features introduced in the VM for Pharo 9 is that all the handling of events and UI is done on the image side. The so-called “headless” VM has no responsibility for how the world is presented to the user.

When the image starts, it detects whether it is running on the “headless” VM. If so, it knows it should take responsibility for showing a window to the user. The image is now also able to decide whether to show a window or not, and what kind of window it wants. In this case, we want to show the Morphic-based world.

To handle events, render, and show a window, the image uses SDL as its backend. This is only one of the possible backends, but we are not going to talk about backends other than SDL in this article. The creation of the window and the handling of its events are done through the OSWindow package, if you want to have a look.

SDL provides a portable way of implementing a UI on many different architectures and operating systems, allowing us to use the same code on all of them. Also, the image uses the FFI bridge, which does not introduce a major slowdown for managing events and redrawing.

But… why is this important or better?

One of the key points is portability: the same code can be executed on different platforms. But it is not the only one: it also allows the image to decide how to handle the UI, letting applications built on top of Pharo create the UI experience they desire.

A final benefit, which in this case is more relevant for us, is the flexibility to modify the UI handling from the image, in a live programming fashion.

We think all these points give more ability to the users to invent their own future.

Solving the Issue

This issue is simple to resolve. We only need to take the Cocoa window (Cocoa is the backend used by all OSX applications) and send it the message setTitleWithRepresentedFilename:; something like the following code will do the magic.

[pharoWindow setTitleWithRepresentedFilename: @'./Pharo.image']

But… this solution is not directly possible:

  1. We need to access the Cocoa window.
  2. This code is in ObjectiveC.
  3. We want it portable: we want the issue fixed, but we also want all the other platforms to keep working.

Let’s solve all the problems from our nice lovely image.

Accessing the Window

The first point is easy to solve. SDL and the Pharo bindings expose a way of accessing the handle of the real Cocoa window that SDL is using. SDL exposes all the inner details of a window through the WMInfo struct.

wmInfo := aOSSDLWindow backendWindow getWMInfo.
cocoaWindow := wmInfo info cocoa window.

Talking with the Cocoa Framework

The Cocoa framework exposes all its API through Objective-C or Swift, neither of which we can use directly. Fortunately, there is a C bridge to communicate with Objective-C objects, exposed through a series of C functions, and we can use the Unified FFI support of Pharo to call these functions without any problem. Here is the description of this API.

We could use a wrapper of these functions that has been developed for Pharo: estebanlm/objcbridge. However, we only need to send a single message, so let’s see if we can simplify things; we don’t want to depend on the whole project just for a single call. If you are interested in a more complete implementation, or in using more Cocoa APIs, this is a good project to check and it will ease your life.

As we want a reduced version of it, we are going to use just three functions, with their corresponding bindings through Unified FFI:

SDLOSXPlatform >> lookupClass: aString
   ^ self ffiCall: #(void* objc_lookUpClass(char *aString))
SDLOSXPlatform >> lookupSelector: aString
  ^ self ffiCall: #(void* sel_registerName(const char *aString))
SDLOSXPlatform >> sendMessage: sel to: rcv with: aParam
  ^ self ffiCall: #(void* objc_msgSend(void* rcv, void* sel, void* aParam))

The first two functions allow us to resolve an Objective-C class and a selector. The third one allows us to send a message with a parameter.

As the parameter to “setTitleWithRepresentedFilename:” is expected to be an NSString (a string in Objective-C), we need to create one from our UTF-8 characters. So we have the following helper:

SDLOSXPlatform >> nsStringOf: aString
   | class selector encoded param |
   class := self lookupClass: 'NSString'.
   selector:= self lookupSelector: 'stringWithUTF8String:'.

   encoded := aString utf8Encoded.
   param := ByteArray new: encoded size + 1.
   param pinInMemory.

   LibC memCopy: encoded to: param size: encoded size.
   param at: encoded size + 1 put: 0.

   ^ self sendMessage: selector to: class with: param

So, we can set the file location just by executing:

aParam := self nsStringOf: aString.

wmInfo := aOSSDLWindow backendWindow getWMInfo.
cocoaWindow := wmInfo info cocoa window.

selector := self lookupSelector: 'setTitleWithRepresentedFilename:'.

self sendMessage: selector to: cocoaWindow getHandle with: aParam.

self release: aParam. "It sends the message #release to the objective-C object, important for the reference counting used by Obj-C"

Doing it portable

Of course, this feature is heavily tied to the current OS. If we are not on OSX, none of this code should be executed. The best alternative is to have a strategy per platform. This idea may look like overkill, but it gives us better modularization and extension points for the future.

Also, it is a good moment to migrate to the same mechanism some OSX-specific code that was using an if clause to check whether it was running on OSX.

So, the following strategy by platform is implemented:

In this strategy, there is a Null implementation that does nothing, used by all other operating systems, and an implementation used by OSX. The OSX implementation has all the custom code needed to change the file associated with the window.

This strategy is then accessed through extension methods in the OSPlatform subclasses. One important point is to do this through extension methods, as we don’t want to introduce a dependency from OSPlatform to SDL.

For the OSX platform:

MacOSXPlatform >> sdlPlatform
   ^ SDLOSXPlatform new

For the others:

OSPlatform >> sdlPlatform
   ^ SDLNullPlatform new
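With these extension methods in place, the platform-specific code is selected by a single polymorphic call; for example (Smalltalk os answers the current OSPlatform instance):

```smalltalk
"Ask the running platform for its SDL strategy:
 an SDLOSXPlatform on OSX, an SDLNullPlatform everywhere else."
Smalltalk os sdlPlatform
```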


The solution to this issue was a good excuse to present the following points:

  • How to introduce platform dependent code without bloating the system with Ifs.
  • How to interact with the operating system through FFI.
  • How we can take advantage of the image controlling the event handling and the UI.

We consider these points very important to allow developers to create portable and customizable applications while taking full advantage of the programming capabilities of Pharo.