Debugging CMake project on Windows ARM64 with Visual Studio

If you have a Windows ARM64 machine such as the Surface Pro X, chances are you want to debug native ARM64 applications on it. However, as of today (2/12/2020), Windows does not support local debugging of ARM64 applications, only remote debugging. Moreover, CMake projects cannot be configured to use remote debugging, or at least I did not find how after hours of searching and googling :).

This page covers how to debug the CMake project of the VM on ARM64 using the Windows remote debugger and Visual Studio. The remote debugger can be used from a separate machine, or from the same machine, giving mostly the impression of a local debugger. Still, there are some glitches and remaining problems.

Installing the Windows Remote Debugger on the ARM64 Machine

The first thing to do is to install the Windows Remote Debugger application on the target machine, the machine we want to debug on.
The instructions are in here.

Basically, just install the remote tools package, and open it a first time to set up the network configuration.
Make sure you go to the options and check the connection port, or set it to the port of your preference.

Getting rid of CMake-VS integration (?)

Visual Studio CMake integration is nice, though it does not support our debugging use case.
Visual Studio CMake integration so far lacks proper support for ARM64 configurations, and most of the debugging options and features one can set from a normal project.
So, instead of opening a CMake-tailored Visual Studio project, we are going to create a normal Visual Studio solution from the command line, and then open it as a normal solution.

To create it manually, run the following, specifying your own configuration arguments.
Notice that this post was only tested with Visual Studio 2019.

$ cmake -B ARM64Solution -S ../repo -G "Visual Studio 16 2019" -A ARM64

Notice that the solution we just created will not contain the Slang-generated source files of the VM. If you want to generate them, you may run the following from the command line, which we support for now only on x86_64 machines.

$ cmake --build ARM64Solution --target generate-sources

Otherwise, copy them from a previous generation if you already have them, as I do, and use the following command to create your project instead (you may want to look at the full set of options in here):

$ cmake -B ARM64Solution -S ../repo -G "Visual Studio 16 2019" -A ARM64 -DGENERATE_SOURCES=FALSE

Now you will see CMake has created a lot of .sln and .vcxproj files.
Open the solution using Visual Studio: Done! You’re almost there!

Configuring the Project for debugging

The basic information to debug the VM using this setup is the one described in here: how to remote debug C++ apps. Basically, this can be summarized in two steps: 1) configure debugging to use remote debugging on localhost:ourPort and 2) set up deployment of binaries.

Step 1, configuring remote debugging, can be easily done as specified in the link above: right-click on the project, go to Debugging, turn on remote debugging, and configure the fields as in the link.

Step 2, setting up deployment of binaries, is required because otherwise the debugging runtime does not seem to be available by default on the machine. Deployment takes care of deploying the Windows debugging runtime too.

Finally, an extra problem I’ve found was that CMake creates some extra targets/projects ALL_BUILD and ZERO_CHECK that cannot be properly deployed. I removed them from the solution and everything worked like a charm.

Now clicking on the run/debug button will compile, “remotely” deploy, launch, and connect to the VM, and you’ll be debugging it natively on ARM64!

To finish, some caveats

For starters, all this dance between CMake and Visual Studio makes it difficult to find proper information online. What is clear is that CMake has far more features than what Visual Studio supports: for example, we cannot yet build our CMake project from Visual Studio on ARM64 without some manual wrangling like the one in this post.

Also, manually removing the ALL_BUILD and ZERO_CHECK projects to debug does not seem like the best solution; I’d like to have something more straightforward that works by default.

Let’s hope that VS CMake integration and support for ARM64 local debugging comes soon.

A VM bug?… No, an image one

Today, some Pharo users asked why we lost a nice feature on Mac OS X. In this operating system, if a window represents an open document, it is possible to Cmd-click on its title and a nice menu showing the full path to the document appears. The user can also open Finder (the file explorer) in any of these directories.

Menu that appears with Cmd+Click on the title of a window

This feature was not available anymore in the latest Pharo 9 VM. What happened? Does the VM have a regression? Do we need to throw everything away and use another programming language :)? Let’s see why this is not the case. It is also a nice story of why we want in-image development at the center of our lives.

Where is the Window handled?

One of the main features introduced in the VM for Pharo 9 is that all the handling of events and UI is done on the image side. The so-called “headless” VM has no responsibility in how the world is presented to the user.

When the image starts, it detects if it is running on the “headless” VM. If that is the case, it knows it should take the responsibility of showing a window to the user. The image is now able to decide whether we want to show a window or not, and what kind of window we want. In this case, we want to show the Morphic-based world.

To handle events, render, and show a window, the image uses SDL as its backend. SDL is only one of the possible backends, but we are not going to talk about the others in this article. The creation of the window and its events is done through the OSWindow package, if you want to look at it.

SDL provides a portable way of implementing a UI on many different architectures and operating systems, allowing us to use the same code on all of them. Also, the image uses the FFI bridge, which does not present a major slowdown for managing events and redrawing.

But… why is this important or better?

One of the key points is portability, so the same code can be executed on different platforms, but it is not the only one. It also allows the image to decide how to handle the UI, letting applications built on top of Pharo create the UI experience they desire.

A final benefit, which in this case is more relevant for us, is the flexibility to modify it from the image, and to do it in a live-programming fashion.

We think all these points give more ability to the users to invent their own future.

Solving the Issue

This issue is simple to resolve. We only need to take the Cocoa window (Cocoa is the backend used by all OSX applications) and send it the message setTitleWithRepresentedFilename:. Something like the following code will do the magic.

[pharoWindow setTitleWithRepresentedFilename: @"./Pharo.image"];

But… this solution is not possible:

  1. We need to access the Cocoa window.
  2. This code is in ObjectiveC.
  3. We want it portable: we want the issue fixed, but we also want all the other platforms to continue working.

Let’s solve all the problems from our nice lovely image.

Accessing the Window

The first point is easy to solve. SDL and the Pharo bindings expose a way of accessing the handle of the real Cocoa window that SDL is using. SDL exposes all the inner details of a window through the WMInfo struct.

wmInfo := aOSSDLWindow backendWindow getWMInfo.
cocoaWindow := wmInfo info cocoa window.

Talking with the Cocoa Framework

The Cocoa framework exposes all its API through ObjectiveC or Swift, neither of which we can use directly. Fortunately, there is a C bridge to communicate with ObjectiveC objects. It is exposed through a series of C functions, and we can use the Unified-FFI support of Pharo to call these functions without any problem. Here is the description of this API.

We could use a wrapper of these functions that has been developed for Pharo: estebanlm/objcbridge. However, we only need to call a single message, so let’s see if we can simplify things: we don’t want a whole project just for doing a single call. If you are interested in a deeper integration or in using more Cocoa APIs, this is a good project to check and it will ease your life.

As we want a reduced version of it, we are going to use just three functions, with their corresponding use through Unified FFI:

SDLOSXPlatform >> lookupClass: aString
   ^ self ffiCall: #(void* objc_lookUpClass(char *aString))

SDLOSXPlatform >> lookupSelector: aString
   ^ self ffiCall: #(void* sel_registerName(const char *aString))

SDLOSXPlatform >> sendMessage: sel to: rcv with: aParam
   ^ self ffiCall: #(void* objc_msgSend(void* rcv, void* sel, void* aParam))

The first two functions allow us to resolve an Objective-C class and a selector to call. The third one allows us to send a message with a parameter.

As the parameter of “setTitleWithRepresentedFilename:” is expected to be an NSString (a String in Objective-C), we need to create it from our UTF-8 characters. So we have the following helper:

SDLOSXPlatform >> nsStringOf: aString
   | class selector encoded param |
   class := self lookupClass: 'NSString'.
   selector := self lookupSelector: 'stringWithUTF8String:'.

   encoded := aString utf8Encoded.
   param := ByteArray new: encoded size + 1.
   param pinInMemory.

   LibC memCopy: encoded to: param size: encoded size.
   param at: encoded size + 1 put: 0.

   ^ self sendMessage: selector to: class with: param

So, we can set the file location just executing:

aParam := self nsStringOf: aString.

wmInfo := aOSSDLWindow backendWindow getWMInfo.
cocoaWindow := wmInfo info cocoa window.

selector := self lookupSelector: 'setTitleWithRepresentedFilename:'.

self sendMessage: selector to: cocoaWindow getHandle with: aParam.

self release: aParam. "It sends the message #release to the objective-C object, important for the reference counting used by Obj-C"

Doing it portable

Of course, this feature is heavily related to the current OS. If we are not on OSX, none of this code should be executed. To achieve this, the best alternative is to have a strategy per platform. This idea may look like overkill, but it gives us better modularization and extension points for the future.

Also, it is a good moment to reimplement in the same way some OSX-specific code that was using an if clause to check whether it was running on OSX.

So, the following strategy by platform is implemented:

In the strategy, there is a Null implementation that does nothing. This is used by all other operating systems, and an implementation that is used by OSX. This implementation for OSX has all the custom code needed to change the file associated with the window.

This strategy is then accessed through extension methods in the OSPlatform subclasses. One important point is to do this through extension methods, as we don’t want to introduce a dependency from OSPlatform to SDL.

For the OSX platform:

MacOSXPlatform >> sdlPlatform
   ^ SDLOSXPlatform new

For the others (for example, as a default in OSPlatform):

OSPlatform >> sdlPlatform
   ^ SDLNullPlatform new


Presenting the solution to this issue was a good excuse to present the following points:

  • How to introduce platform-dependent code without bloating the system with ifs.
  • How to interact with the operating system through FFI.
  • How we can take advantage of the image controlling the event handling and the UI.

We consider these points very important to allow developers to create portable and customizable applications while taking full advantage of the programming capabilities of Pharo.

Bisecting Pharo versions to find regressions

From time to time it happens that a bug is accidentally introduced and we realize it several versions later. If the cause of the bug is not clear, one good strategy is to find the piece of code change that introduced the bug, and engineer a test and fix from that change. If we have the entire history of changes of the project, we can then extract this information from the commits.

In this post, we will show how we can easily bisect Pharo builds using the Pharo Launcher to find the cause of a bug. In particular, I wanted to show the case of a real bug, whose issue you’ll find here: the code completion menu was not being closed when clicking outside of it.

There is git bisect…

Git provides a pretty useful command called git bisect that helps you find the culprit commit. Git bisect implements a binary search on commits: it proposes commits that you have to test and mark as good or bad. Based on how you tag each commit, it will look for another commit and eventually find the exact commit that introduced the problem.

Git bisect can be pretty handy for finding bugs, but it can be pretty heavy when each step requires a long build process just to test. This is exactly our case when bisecting the Pharo history: we need to build an image.

We are not going to go into much detail with git bisect, but if you want some more docs on it, you can take a look at the official docs in here.

Image bisection with the Pharo launcher

The Pharo Launcher has a super fancy feature that can be used for bisection: it allows downloading any previous build of Pharo that is stored on the Pharo file server. This saves us from building an image for each version we are digging into! It is important to know that the Pharo file server stores all succeeding builds of Pharo, which are almost all of them, and that there is a build per PR. This will save us some time when attacking the issue, but it will be a bit less precise, because a PR can contain many commits. However, once the PR is identified, the commits in it will in general all be related.

In the Pharo 9.0 template category we have listed all its builds, with number and associated commit

Once we know this, we can do the bisection ourselves. For example, if we want to test the entire set of 748 builds, we will first test build #374 (748 / 2). If it is broken, it means the problem was introduced between builds #1 and #374, and we continue testing with build #187 (374 / 2). Otherwise, the bug was introduced between builds #374 and #748, and we should test build #561 ((748 + 374) / 2). We can continue like that until we find the build X where X is working and X+1 is broken.

The advantage of doing it as a binary search comes from the fact that we cut the search space in half every time, making the search a log2 operation. In practical terms: if we have 1000 commits, we will have to do log2(1000) ≈ 10 tests to find the culprit, which is far better than linearly searching the 1000 commits one by one :).
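The manual procedure above is exactly a binary search. As a sketch (in Python rather than Pharo, with a hypothetical is_broken predicate standing in for “download the build in the Launcher and test it by hand”), it could look like this:

```python
def bisect_builds(good, broken, is_broken):
    """Find the first broken build, assuming builds good..X work and
    X+1..broken fail for some single cutoff; is_broken(n) tests build #n."""
    low, high = good, broken  # invariant: low is good, high is broken
    while high - low > 1:
        mid = (low + high) // 2
        if is_broken(mid):
            high = mid   # bug introduced at mid or earlier
        else:
            low = mid    # bug introduced after mid
    return high          # first broken build

# Example: the bug appeared in build #163, builds go from #1 to #748
print(bisect_builds(1, 748, lambda n: n >= 163))  # → 163
```

With 748 builds this needs about 10 downloads instead of hundreds, matching the log2 estimate above.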

Finding the problematic PR

The issue we were interested in did not exist in build #162 and stopped working in build #163. The next step is to understand what was introduced in build #163. Once we have the breaking build, we need to obtain the PR that led to the change. We can obtain the PR commit from the image file name given by the launcher, or we can get it from the about and help dialogs in Pharo.

The about and help dialogs have precise information of how the image was built.

Once we have the commit, the next step is to look for it in git with your favorite tool: the command line, Iceberg, any external GUI-based git tool, or even GitHub. Since there are no integrations in Pharo that are not pull requests, each build commit will effectively be a PR merge commit. In our real bug example, the problematic commit was this one, which was the integration of a PR I did myself ( 🙂 ).

The problematic commit is always a PR integration in Pharo

Now that we have the problem, we can engineer a fix for it.

Analyzing the bug

So again, this was working in build #162 and stopped working with my changes in #163. To understand the issue, my strategy was the following: compare how the execution flows in both the working and non-working builds.

My first step was to understand how the correct case was working in build #162. I added a breakpoint in the closing of the code completion menu, tried to auto-complete something and clicked outside. The stack trace looked as follows:


We can see that the closing of the code completion menu happens when it loses the keyboard focus. Looking at the variables in the stack, the focus before the click was:

a RubEditingArea(583942144)

And the click is requesting focus on

a RubTextScrollPane(76486144)

But going a bit up in the stack, the code that produces the change in the focus is

NECMenuMorph >> mouseDown: anEvent
	self flag: #pharoFixMe "ugly hack".
	engine editor morph owner owner   "<--------"
		handleMouseDown: anEvent.

A more insidious question: why does the NECMenuMorph receive the click, if I clicked outside of it?! That is because the menu morph requests “mouse focus” when it is shown:

NECMenuMorph >> show
	self resize.
	self activeHand 
		newMouseFocus: self.
	self changed.

Comparing to the parent commit

A similar analysis in build #163 shows:

  1. NECMenuMorph does not receive the mouseDown event.
  2. The NECMenuMorph never becomes the mouseFocus (I added a traceCr on mouse focus change, see below):

mouseFocus: aMorphOrNil
  aMorphOrNil traceCr.

  3. The code of show was changed (by myself) to not change the focus if no hand is available:

NECMenuMorph >> show
	self resize.
	self activeHand ifNotNil: [ :hand | hand newMouseFocus: self ].
	self changed.

The problem was that the NECMenuMorph was trying to access the hand before being installed in the world! And the current hand depends on the world where we are installed. Otherwise, we need global state, which I was trying to minimize :)…

A solution

The solution I implemented was to call show once we are sure the morph is in the world.


NECMenuMorph >> openInWorld
	super openInWorld.
	self show

And avoid calling show before it:


	self selected: 0.
	firstVisible := 1.
	context narrowWith: context completionToken.
	(context entries size = 1 and: [ context entries first contents = context completionToken ]) ifTrue: [
		self delete.
		^ false ].
	context hasEntries ifTrue: [ self selected: 1 ].
	^ true

This would mean that we can inline show in the openInWorld method and then remove the conditional. We can also argue that show is not morphic vocabulary…

NECMenuMorph >> openInWorld
	super openInWorld.
	self resize.
	self activeHand newMouseFocus: self.
	self changed.


In this post we have seen how we can chase a regression in Pharo by bisecting Pharo builds. Once the culprit build is identified, we can navigate from there to the pull request, and thus to the commits that caused the problem.

We have also shown how to compare two executions to understand the differences in behaviour, and finally how this was fixed in a real case.

Downloading music from Google Play Music


Google Play Music (GPM) is a service offered by Google to listen to music online (like Spotify, Deezer, …). Having a premium subscription, I can listen to a lot of music using the online service, but when I have no internet connection… I cannot 😦. So I wanted to download the music ^^.

> This might be illegal, so I only use this situation to explain the process of using Pharo to download music from GPM; you must not use this for real.


My idea is simple: if I can listen to music on my computer, it means my computer has to download it. I know that music coming from GPM is in the mp3 format. So the process to download the music is simple:

  1. Access my GPM library.
  2. For each song, download the corresponding mp3 file.
  3. Set the metadata of each song.

Access my GPM library

There is no official API for the GPM service; however, the gmusicapi Python project provides an unofficial one. This API allows us to access every element of our GPM library.

I’m not that good at Python, but I know it is possible to control Python from Pharo. So I decided to use the PyBridge project of Vincent Aranega.

PyBridge allows us to use the Python language from Pharo. So, I’ll use it to load and use the unofficial GPM API.

Set up PyBridge

PyBridge is currently a work in progress and consequently requires a little setup. One needs to download the server project and the Pharo client project.

For the Pharo client project, it is super easy. I only need to download the project from GitHub and install the baseline:

Metacello new
    baseline: 'PyBridge';
    repository: 'github://aranega/pybridge/src';
    load.

For the server, the project is inside the python branch of the git repository. It requires pipenv to easily set up Python virtual environments. So clone it in another folder and create a virtualenv by doing a simple:

$ pipenv install

Then, install the gmusicapi and run the server by executing the following commands:

$ pipenv shell
(pybridge) $ pip install gmusicapi
(pybridge) $ python

Congratulations! You have correctly set up PyBridge to use the gmusicapi library!

Log in to GPM

Before using the library, I need to log in to GPM. To do so, I will use gmusicapi. The usage of the Python library in Pharo is pretty straightforward, as PyBridge exposes Python objects in a Smalltalk fashion.

| mobileClient api |

"Access to the API class"
mobileClient := PyBridge load: #'gmusicapi::Mobileclient'.
"Create a new instance"
api := mobileClient new.
"Create the authentication key"
api perform_oauth. "This step must be done only once per GPM account to get an oauth key."

"Login using oauth key"
api oauth_login: 'XXXXX' "XXXXX is my private key ^-^"

Nice! I now have full access to the GPM API using PyBridge and Pharo.

Download mp3 files

GPM does not allow users to download music. However, it is possible to ask for the audio stream in mp3 format. I will use this to download the files ^-^.

In the following, I will present an example that downloads the album Hypnotize by System Of A Down. The album is in my GPM library, so I can retrieve it from “my songs”.

To download the songs, I will access all my songs, select the ones that belong to the album, and then download them.

"access all my songs"
library := api get_all_songs. "get_all_songs is part of the python library"

0 to: library size - 1 do: [ :index | "take care with indexes in python"
    | music |
    music := library at: index.
    ((music at: #album) literalValue beginsWith: 'Hypnotize') "is the song at index part of the album?"
        ifTrue: [
            | fileRef |
            fileRef := '/home/user/music' asFileReference / ((music at: #title), '.mp3').
            fileRef binaryWriteStreamDo: [ :mp3WriteStream |
                (ZnEasy get: (api get_stream_url: (music at: #id))) writeOn: mp3WriteStream "download the file" ] ] ]

I have now downloaded all the songs of the album. To summarize:

  1. Pharo asks Python for all songs.
  2. Then Pharo iterates over the Python map to select the right songs.
  3. It asks Python for the stream URL of each song.
  4. And it uses Zinc to download the song and create the mp3 file.

Set the metadata

Our strategy works pretty well, but the metadata of the mp3 files is not set. It is not necessarily a problem, but it is preferable to have it when using a music manager (such as Clementine, MusicBee, iTunes, …). So, I will use VLC to set the metadata of our files. It is possible to use VLC from Pharo using the Pharo-LibVLC project.

Set Up Pharo LibVLC

Installing the FFI binding of VLC for Pharo is easy. You need to: (1) install VLC, and (2) install Pharo-LibVLC.

Metacello new
  baseline: 'VLC';
  repository: 'github://badetitou/Pharo-LibVLC';
  load.

Then, it is possible to use VLC in Pharo after initializing it.

vlc := VLCLibrary uniqueInstance createVLCInstance

Set the metadata

Inside the previous script, I insert the code that sets the metadata using VLC.
First, I create a reference to the mp3 file for VLC, then I set the metadata using the VLC API.

| media |
media := vlc createMediaFromPath: fileRef fullName. "create mp3 reference for VLC"
media setMeta: VLCMetaT libvlc_meta_Album with: (music at: #album) literalValue asString.
media setMeta: VLCMetaT libvlc_meta_Title with: (music at: #title) literalValue asString.
media saveMeta.
media release.

In the example, I only set the “album” and “title” attributes, but it is possible to set more metadata.


I have used Zinc, VLC, and Python with a Python library to download music from the Google Play Music service. It shows how easy it is to use Pharo with other programming languages, and I hope it will help you to create many super cool projects.


How to play Sound in Pharo

This is a brief post on how to load the sound package, enable it, and play some sound samples in Pharo 9.0. For Pharo 9.0, we fixed the sound support by refactoring it and using SDL2 for enqueuing the playback of sound samples. The current version only supports sound playback; it does not yet support sound recording from a microphone.

Downloading a clean Pharo 9 image and VM

Some users have reported problems when following these instructions on older versions of Pharo. In case a weird problem appears, such as “Failed to open /dev/dsp”, we recommend downloading the latest Pharo 9 image and headless virtual machine. These can be downloaded through the Pharo Launcher, manually through the file server on the Pharo website, or by executing the following Zeroconf bash script on Linux or OS X:

curl | bash

Loading the Sound Package

The first step required to be able to play sound in Pharo 9.0 is to load the Sound package. The Sound package is not included by default in the main Pharo image, so it has to be loaded explicitly. The following Metacello script can be used to load it from a Playground:

Metacello new
baseline: 'Sound';
repository: 'github://pharo-contributions/Sound';
load.

Setting for enabling Sound

Loading the sound package is not enough to be able to play sound in Pharo. In addition, sound playback has to be enabled in the Settings browser. After the Sound package is loaded, a setting named “Sound” appears under the Appearance category, with a checkbox that needs to be checked to activate sound playback.

Examples for playing Sound samples

The Sound package bundles several software-based synthesizers, so it is not required to load explicit wave (.wav) files in order to play samples and music for testing it. The following is an example script that plays a major scale with an electric bass patch generated through FM synthesis:

(FMSound lowMajorScaleOn: FMSound bass1) play

Since we inherited this package from older versions of Pharo, we do not yet fully comprehend all of the features for sound and music synthesis it provides. However, we recommend looking at the existing instrument examples present on the class side of the AbstractSound and FMSound classes.

Wave samples (.wav) from disk can be loaded and played through the SampledSound class. For example, if we have a sound sample in a file named test.wav, in the same folder as the image, we can load it and play it with the following script:

(SampledSound fromWaveFileNamed: 'test.wav') play

The most complicated and spectacular example bundled in the Sound package is a playback of Bach’s Little Fugue with multiple stereophonic voices. This example can be started with the following short script in a Playground:

AbstractSound stereoBachFugue play

If you want to contribute…

The sound package is hosted on GitHub and you can really help us to improve it.

Metacello new
  baseline: 'Sound';
  repository: 'github://pharo-contributions/Sound';
  load.

Implementing Indexes – Replacing the Dictionary

This is the fourth entry in the series about implementing full-text search indexes in Pharo. It all started with the first entry, Implementing Indexes – A Simple Index, where we presented the need for having indexes in large images. The second entry was Implementing Indexes – Who uses all my memory, where we analysed the first version and the problems it had. Then came the third entry, Implementing Indexes – Compressing the Trie, where we showed some improvements to the Trie implementation.

This fourth and final entry analyses the remaining problem in the implementation: the space taken by the identity dictionary.

Remember the Problem

When we check the result of analysing the memory impact of our solution, as we have seen in the previous blog entries, we get the following table:

Class name | # Instances | Memory (Bytes) | Memory (%)
Memory footprint of our solution.

We can see that most of the memory is taken by 4 classes (Array, CTTrieNode, IdentityDictionary and Association). Also, it is clear that there is a relation between the number of instances of these classes and the number of nodes in the Trie.

If we check the raw structure of our nodes we have something like this:

Inspecting our Trie

We can see that each node has an IdentityDictionary with Characters as keys and CTTrieNodes as values. This creates the structure of nodes that we have in the Trie. From this, we can explain the number of instances of IdentityDictionary, but where are all the arrays and associations? They are taking 55% of the memory, so we have to find them.

If we continue exploring, we can see how IdentityDictionary is implemented in Pharo.

Inspecting an IdentityDictionary

Dictionaries in Pharo are implemented using an internal array. This array contains Associations; each association has a key and a value. Those are the keys and values of the dictionary. We can see this when we continue inspecting the objects.

The associations are not stored in the array in sequential positions; there are holes in the middle. Each association is stored at the position derived from the hash of its key, so it is easy to look up an association by its key (remember, dictionaries are optimized for access by key). You can check the class comments of Dictionary and HashedCollection to understand this behavior.

Finally, to improve the speed of adding new elements, the arrays always have free positions.
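To make the layout concrete, here is a toy sketch in Python (illustrative only, not Pharo’s actual implementation) of a dictionary backed by an internal array with spare slots, storing key/value associations at positions derived from the key’s hash:

```python
class ToyDictionary:
    """Toy open-addressing dictionary: an internal array with spare
    capacity, holding (key, value) associations at hash-derived
    positions, with holes (None) in between."""

    def __init__(self, capacity=8):
        self.slots = [None] * capacity  # mostly holes, by design

    def at_put(self, key, value):
        index = hash(key) % len(self.slots)
        # linear probing: skip occupied slots that hold a different key
        while self.slots[index] is not None and self.slots[index][0] != key:
            index = (index + 1) % len(self.slots)
        self.slots[index] = (key, value)  # the "association"

    def at(self, key):
        index = hash(key) % len(self.slots)
        while self.slots[index] is not None:
            if self.slots[index][0] == key:
                return self.slots[index][1]
            index = (index + 1) % len(self.slots)
        raise KeyError(key)

d = ToyDictionary()
d.at_put('a', 1)
d.at_put('b', 2)
print(d.at('b'))            # → 2
print(d.slots.count(None))  # → 6: most slots are empty "holes"
```

The two entries occupy 2 of 8 slots; the other 6 are the empty positions that make insertion and hashed lookup fast, and that cost the memory we are trying to reclaim.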

This design is great to provide fast access to the information, speeding up both access and insertion of new elements. However, as always, we are trading memory for speed. In our specific case, we want to improve the memory footprint.

Basically we need to address:

  • Remove the need to have both an IdentityDictionary and an Array. Can we have a single instance?
  • Remove the use of associations.
  • Remove the empty slots in the array.
  • Do all of the above without destroying the access performance 🙂

A compromise solution

When solving problems in the real world we need to make trade-offs. Our solution is slower, but it takes less space. So, we need to balance these two problems.

We have solved the problem by implementing the dictionary as a single array without associations. The TrieNode stores each key-value pair directly in a compact array (the array is recreated each time with only the occupied space). Each key-value pair is stored in the array in sequential positions, in the order they were added to the node.

For example, if we have the following key -> value pairs, added in this order:

$C -> nodeC.
$A -> nodeA.  
$B -> nodeB. 

The array will contain them as:

{$C. nodeC. $A. nodeA. $B. nodeB} 

So, in the optimized version the nodes have this structure:

Optimized Nodes, reimplementing the dictionary behavior

The keys are stored in the odd positions of the array, and the values in the even positions. Each key is next to its value, so it is still possible to access the value from the key, and the key from the value, but it requires traversing the array.
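The same idea as a small Python sketch (illustrative only, not the actual CTTrieNode code; note that the post counts positions 1-based, Smalltalk style, while Python indexes from 0, so keys land at even indices here):

```python
class CompactNode:
    """Keys and values interleaved in one flat array, as in the optimized
    trie nodes: [key1, value1, key2, value2, ...] with no empty slots."""

    def __init__(self):
        self.pairs = []  # always exactly the occupied size

    def at_put(self, key, value):
        for i in range(0, len(self.pairs), 2):  # linear scan over the keys
            if self.pairs[i] == key:
                self.pairs[i + 1] = value
                return
        self.pairs = self.pairs + [key, value]  # grow by copying, as described

    def at(self, key):
        for i in range(0, len(self.pairs), 2):
            if self.pairs[i] == key:
                return self.pairs[i + 1]
        raise KeyError(key)

    def key_of(self, value):  # the reverse lookup also scans the array
        for i in range(1, len(self.pairs), 2):
            if self.pairs[i] == value:
                return self.pairs[i - 1]
        raise KeyError(value)

node = CompactNode()
node.at_put('C', 'nodeC')
node.at_put('A', 'nodeA')
node.at_put('B', 'nodeB')
print(node.pairs)  # → ['C', 'nodeC', 'A', 'nodeA', 'B', 'nodeB']
```

There are no holes and no association objects, but every lookup is now a linear scan: exactly the space-for-time trade-off discussed below.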

Benefits of the solution

If we analyse the impact of the new solution, we can see that the memory footprint has been reduced to less than a tenth of the original solution. This is the combination of the optimization from the last blog post and the one done here.

(Table: Class name | # Instances | Memory (Bytes) | Memory (%))

A clear improvement is the size of the arrays. Not only do we have fewer arrays (because we have fewer nodes, thanks to the improvement done in the previous blog post), but each array also occupies less space, because it has fewer empty slots.

Problems of the Solution

As we have said before, there is no perfect general solution, only correct solutions for specific situations. In this case, we have put the emphasis on memory impact, which comes with the following drawbacks.

  • Our solution is a custom data structure, so we needed to implement it from scratch. This introduces possible places where we have bugs and problems that we have not yet seen.
  • We are not taking advantage of the implementation of Dictionary in the image, so we are duplicating code.
  • We are penalizing the creation of new nodes, as the array has to be copied into a larger one.
  • Access to this data structure is linear, so we are penalizing access time as well. The complete array has to be traversed to access an element or check whether it is there.
  • We had to carefully craft the code that accesses the data structure so as not to penalize the execution time more than we can afford. This code is performance sensitive and any change to it has to be carefully tested and measured; benchmarks and profiling are good friends.
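As a rough illustration of the third point, growing the compact array on insertion could look like this (the method and the pairs variable are hypothetical names):

```smalltalk
"Hypothetical sketch: every insertion copies the pairs array into a
new array two slots larger, keeping the array fully occupied."
CTOptimizedTrieNode >> at: aKey put: aValue
	pairs := (pairs copyWith: aKey) copyWith: aValue
```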

These drawbacks are not important in our current situation, but in other contexts they could make this optimization technique unusable.


In this Trie-logy of blog entries we have presented a real problem that we encountered during the development of Pharo. The intention of this series is not only to present a solution, but also to present the tools and the engineering process we followed to arrive at it. The process and the tools are more valuable than this specific solution, as the solution is only valid in our context. So, we consider it a good practice that can be useful to other developers in completely different situations.

Implementing Indexes – Compressing the Trie

This is the third entry of the series about implementing full-text search indexes in Pharo. The second entry is Implementing Indexes – Who uses all my memory, where we analysed the first version and the problems it has. You can start reading from this post, but some things were presented in the previous one, for example how we detected the problem with the basic implementation. If you are new, you can also check the first post: Implementing Indexes – A Simple Index.

Remembering the Problem

In the last post, we saw that our implementation is not optimal. It was creating a lot of nodes to store the data we are keeping. In that version, we had 18.484 values stored in the trie, but for those we needed 159.568 node instances; a ratio of 9 nodes per value. This is unacceptable.

This problem comes from the way the elements are stored in the Trie. We create a node for each character in the key, even if these nodes do not introduce a branch in the Trie. We have the following structure:

As we have said, it is desirable to collapse all these nodes, reducing the memory impact.

Collapsing useless Nodes

In the data structure we are analysing, a node provides crucial information in either of the following scenarios:

  • It maps a path from the root to a given value.
  • It has more than a single child, because there is a split in the key path.
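These two conditions could be expressed as a simple predicate on the node (a sketch with assumed accessor names, not the actual code):

```smalltalk
"Hypothetical sketch: a node is worth keeping if it ends a key path
(it holds a value) or if the key path splits here."
CTOptimizedTrieNode >> isCrucial
	^ self hasValue or: [ self children size > 1 ]
```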

So we want to reduce the previous Trie to a new one like this:

In this new implementation, the chain of intermediate nodes that do not provide crucial information is collapsed into a single node. We can see that all the light blue nodes have been collapsed into a single green one.

To support this, we have changed the node to hold a String instead of a single Character. The key path from the root is now the concatenation of those strings.

With this improvement, we have passed from an occupation rate of 13% to 100%.

This improvement has a little trade-off: we are making the graph simpler in memory impact, but we have increased the complexity of the look-up and insertion of new nodes.

This new implementation is available side-by-side with the unoptimized version.

If we create the same example of the previous post, but with the optimized trie:

behaviors := CTOptimizedTrie new. 
SystemNavigation default allBehaviorsDo: [ :aBehavior | 
       behaviors at: aBehavior name put: aBehavior name ].

To perform the analysis, we have used the same tool, SpaceAndTime, that we have used in the previous post.

stats := GraphSpaceStatistics new
	rootObject: behaviors;
	yourself.

stats totalSizeInBytes.
stats totalInstances.
stats statisticsPerClass.
stats statisticsPerClassCSV.

We can see the reduced number of nodes in the Trie, and its memory impact. We have passed from 159.568 instances in the previous version to 22.102 instances in this version, and from occupying 5.106.176 bytes to 530.448 bytes in nodes.

Additional Strings

However, this is not perfect. As we are keeping a String instead of Characters, we have to create those String instances in memory. In Pharo, Strings are objects allocated in memory, while Characters are immediate values, encoded directly in the referencing instance. This means we have 22.099 additional instances, representing 407.704 additional bytes.

At first glance this looks like a problem, but adding up the nodes and the Strings we still have less than 1MB, against the 5MB of the previous solution. So there is a clear improvement.

This is a nice example: even though we have more String instances, the new solution has a clear advantage. It teaches us that we need to measure before making a statement.

Splitting Nodes on Insert

As we have said, this new implementation requires splitting nodes when adding elements to the Trie. The insert and delete operations are more complex than in the base implementation.

The following image presents an example.

Adding a new value to the Trie requires creating a new node, and it might also require splitting an existing node if part of the key is held in a single node.

In this case, the green node should be split in two to handle the insertion of the pair ‘ac’ -> 3.
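To give an idea of what such a split involves, here is a sketch (all names are hypothetical; the real CTOptimizedTrie code differs): the collapsed node keeps the shared prefix, while a new child receives the remaining suffix together with the old children and value.

```smalltalk
"Hypothetical sketch of splitting a collapsed node at a prefix length."
CTOptimizedTrieNode >> splitAt: prefixLength
	| suffixNode |
	suffixNode := self class new.
	suffixNode characters: (characters copyFrom: prefixLength + 1 to: characters size).
	suffixNode children: children.
	suffixNode nodeValue: nodeValue.
	characters := characters copyFrom: 1 to: prefixLength.
	children := OrderedCollection with: suffixNode.
	nodeValue := nil
```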

At first glance, it looks like the new implementation is slower than the old one. But… to assert anything we have to measure it.

We can measure the time to generate the whole Trie of behaviors. For doing so, we will use the #timeToRun message of blocks. To measure the new implementation, we execute:

   behaviors := nil.
   [ behaviors := CTOptimizedTrie new.
   SystemNavigation default allBehaviorsDo: [ :aBehavior |
       behaviors at: aBehavior name put: aBehavior name ]
   ] timeToRun.

And to measure the base implementation:

  behaviors := nil.
  [ behaviors := CTTrie new.
  SystemNavigation default allBehaviorsDo: [ :aBehavior |
      behaviors at: aBehavior name put: aBehavior name ]
  ] timeToRun.

From the results we can see that the new implementation takes 169 ms and the old one 210 ms. Our initial intuition was misleading.

Again, we have to measure. If we don’t measure it is impossible to compare different solutions.


This post presents a technique that we have used to improve the Trie implementation. However, the most important part of this post is showing how to measure the qualities of a solution. We have also shown that without measuring, it is impossible to compare solutions or even to make good decisions.

Using previous experience to evaluate a solution is important, but it can be misleading. Measuring the problem at hand gives us the real answers.

We still have one entry left in this series. In the last entry, we will present how we solved the problem with the IdentitySet and how we finally reduced the memory consumption to a tenth.

Transcript: the misunderstood global

In this blog post, I will discuss why Transcript is often badly used. I will show that with some simple care we can develop modular solutions that are flexible and can take advantage of Transcript without its inconveniences.

As a general remark, if you want to log, you had better use a real logging system that logs PLAIN real objects and not just dead strings, because you can do a lot more with objects than with mere strings. You can use Beacon (whose core is available in Pharo by default) or other logging frameworks, such as the one developed by Cyril Ferlicot based on dynamic variables.

Now let us imagine that you still want to log strings.

Transcript: A misunderstood object

Transcript is a kind of stdout on which you can write string output. It is cheap. The class exposes a stream-based API (and this is a really important design point, as we will see below).

Here is a typical bad use of Transcript

    Transcript show: 'foo'; cr

It is bad because it hardcodes a reference to Transcript, while Pharo proposes helper methods such as traceCr:

   self traceCr: 'foo'

Some developers may think that this is not important, but it can help you if one day you want to control the logging, for example by using an object with the same API that does something else. So avoid hardcoding globals. But there is more.

The real concern

The problem, amongst others, is that Transcript is a singleton and in fact an UGLY global variable. Once you use it for real in your code, you basically kill the modularity of your program, and the only thing you can do is hope that nothing bad happens.

Let us look at a concrete simple case. The Microdown parser used to come with a simple method named closeMe: (yes, we removed it):

MicAbstractBlock >> closeMe
    Transcript << 'Closing ' << self class name; cr; endEntry

This method produces a little trace so that the parser developer could understand what was happening. So you may think that this is ok.

There are two main problems:

  • First, what if you want to deploy your application in a system where you do not want Transcript, its class, and all its family at all? I’m thinking, for example, about people producing minimal images.
  • Second, when Pharo is built on Jenkins, all the tests are executed because we love tests. And this Transcript expression produces dirt in the build log. You do not want to have to read such traces when you are trying to understand why the build is not working.
Closing MicCodeBlock
Closing MicCodeBlock
Closing MicCodeBlock
Closing MicCodeBlock
Closing MicCodeBlock
Closing MicCodeBlock
Closing MicCodeBlock
Closing MicCodeBlock
Closing MicCodeBlock
Closing MicCodeBlock
Closing MicCodeBlock
Closing MicCodeBlock
Closing MicHeaderBlock
Closing MicHeaderBlock
Closing MicHeaderBlock
Closing MicHeaderBlock
Closing MicHeaderBlock
Closing MicHeaderBlock
Closing MicHeaderBlock
Closing MicListItemBlock
Closing MicListItemBlock
Closing MicOrderedListBlock

You may say that I’m exaggerating. But let us see the good way to have a log and be able to unplug it.

Encapsulation to the rescue

The solution is really simple: just use object-oriented programming and encapsulation. To support the parser developer, we can simply add a stream to the class.

For example, we define a new instance variable logStream and initialize it to a write stream.

MicAbstractBlock >> initialize
   super initialize.
   logStream := WriteStream on: (String new: 1000)

Then we can rewrite the method closeMe as follows:

MicAbstractBlock >> closeMe
   logStream << 'Closing ' << self class name; cr

Then we can provide a simple setter method so that the developer can, for example, set the Transcript as the stream to write to.

MicAbstractBlock >> logStream: aStream
   logStream := aStream

If we do not control the creation of instances of the class using the stream, it will be difficult to configure it. So if we want to be able to configure the class to use a different logger, we can define a class variable so that we can send messages to the class; at initialization time, we take the value from the class variable instead of hardcoding the WriteStream.
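For example, assuming a class variable named DefaultLogStream (the names here are illustrative, not the actual Microdown code), this could look like:

```smalltalk
"Class-side configuration point: set the stream all new instances will use."
MicAbstractBlock class >> defaultLogStream: aStream
	DefaultLogStream := aStream

"Instances pick up the configured stream, falling back to an
in-memory write stream when nothing was configured."
MicAbstractBlock >> initialize
	super initialize.
	logStream := DefaultLogStream
		ifNil: [ WriteStream on: (String new: 1000) ]
```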

The net result is that we have control and we can decide what happens. In addition, we can write tests to make sure that the logging is correct. Using Transcript makes this a brittle exercise, since someone else may write to the Transcript when you do not expect it.


Transcript is not bad per se, but it promotes bad coding practices. Developers should stop listening to the sirens of easy and cheap global variables. With a little bit of care and a limited infrastructure, it is possible to get the best of both worlds: modular objects that take advantage of the existing infrastructure to which Transcript belongs.

Object-centric breakpoints: a tutorial

The new Pharo debugger is shipped with object-centric breakpoints. An object-centric breakpoint is a breakpoint that applies to one specific object, instead of being active for all instances of that object’s class. We have two kinds of object-centric breakpoints:

  • The haltOnCall breakpoint: give it a method selector and a target object, and the system will halt whenever the target object executes the corresponding method.
  • The haltOnAccess breakpoint: for a target object, the system will stop whenever the state of that target object is read or written. It is possible to specify which variable's accesses will trigger the breakpoint.


If you use the object-centric experimental image, you can skip this part and start the tutorial. Otherwise, follow the installation instructions here.

The tutorial

Context: the OCDBox objects loop

In this example, we will use a class named OCDBox. It models a trivial box object that holds objects. It has two instance variables, elements and name, and a few methods in its API. In particular, the method addElement: adds an object to the box.

We will use a test to practice object-centric breakpoints. In this test, we instantiate 100 boxes. We iterate over all these boxes to:

  • add an object to each box,
  • print the box on the Transcript.
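Such a test could be sketched as follows (the OCDBox API beyond addElement: is assumed from the description above):

```smalltalk
"Hypothetical sketch of the test: create 100 boxes, then add an
element to each one and print it to the Transcript."
OCDBoxTest >> testBoxesLoop
	| boxes |
	boxes := (1 to: 100) collect: [ :each | OCDBox new ].
	boxes do: [ :box |
		box addElement: Object new.
		box crTrace ]
```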

When you execute the test, the Transcript shows you each box that is printed in the iteration loop. If you look at the code of the OCDBox class, you will see that:

  • adding an element to a box modifies its name,
  • the printing method uses the name to display the box in the Transcript.

Object-centric breakpoints

In the following, we demonstrate the use-cases of the haltOnCall and haltOnAccess breakpoints. Each time, we start by demonstrating the breakpoint through a video, then we explain the use-case and how to identify situations where that object-centric breakpoint might help. All videos have sound.

Warming up with object-centric breakpoints

The following video shows how to install object-centric breakpoints through the inspector. Basically, we create two box objects b1 and b2, and we install various object-centric breakpoints on b2.

Installing object-centric breakpoints
  • We start by inspecting b2,
  • We select the name instance variable in the inspector, and install an object-centric breakpoint through the contextual menu:
    • a halt on read will stop the execution each time the name variable is accessed,
    • a halt on write will stop the execution each time the name variable is written to,
    • a halt on access will stop on both read and writes of name,
  • We select one of the methods of b2 in the inspector, and install a halt on call through the contextual menu:
    • each time this method is called with b2 as the receiver, the execution halts

In our example, we install a halt on read breakpoint on the name instance variable of b2, and a halt on call breakpoint on the name: method of b2. For each case, the execution will halt only for methods reading the name instance variable in b2, or when b2 calls its name: method. The execution will not halt for b1, nor for any other object.

In the following, we apply our breakpoints on a debugging example. Instead of using the inspector, we install object-centric breakpoints directly from the debugger, as we would do in a real debugging session.

Removing object-centric breakpoints

Object-centric breakpoints are garbage collected along with objects: if your object disappears, then so does the breakpoint.

For existing objects, removing an object-centric breakpoint is as simple as inspecting the object (for example in the debugger), going into the breakpoint pane, then selecting the breakpoint and using the context menu to remove it.

Stopping when a specific object receives a particular message

The test iterates over a hundred box objects, and to each object it sends the addElement: message. In this exercise, we select one box object among all the boxes the test iterates over, and we install an object-centric breakpoint on the addElement: method of that object. Then, we proceed the test and the addElement: method is called on each of the boxes. The execution only stops when the selected box executes the addElement: method.

Getting to the object of interest

The first step to use an object-centric breakpoint is to get to the object you want to debug.

For now, we do that by putting a first breakpoint into our code. When the execution stops, we navigate in the debugger to find objects of interest and install object-centric breakpoints.

In the following, we put a self halt. in the test, before the iteration over the box objects. We execute the test and start from there.

Breaking when the box of interest receives the #addElement: message
  1. After adding a halt (see above), we execute the test:
    • The execution halts and opens a debugger,
    • we are able to select an object to debug it.
  2. We select one of the boxes in the boxes collection:
    • Double-click on the boxes variable in the bottom inspector,
    • choose a box in the items pane by double-clicking on the box.
  3. In the opening inspector, go into the meta-side and select the addElement: method
  4. Right-click on that method, and select halt on call in the menu:
    • The breakpoint is installed on this method, and scoped to our box object,
    • you can see the breakpoint in the breakpoint pane (if you look at other boxes, there are no breakpoints).
  5. Proceed the execution:
    • The test iterates over the boxes and sends the addElement: message to each box,
    • only the box that you instrumented breaks on this method.
Use-cases for the halt on call breakpoint

The typical use-case for this object-centric breakpoint is when you have many instances of the same class running in your program, and you are interested in debugging a method for one specific object among these instances. You want to answer the following question: “When does this particular object execute this particular method during the execution?”

Our box example illustrates this case. You need to debug a target method for a specific instance of OCDBox, while the test iterates over a hundred of them. Putting a breakpoint in the OCDBox class would stop the execution for every instance calling the target method.

Now imagine that you want to debug a display method in a graphical object. You will have many more instances sharing that display method. Using conventional breakpoints is tedious because the execution will stop every time one of the graphical objects uses that display method.

In contrast, you use the halt on call breakpoint to debug a specific method for a specific object because:

  • You want to avoid designing super-complex conditional breakpoints to filter your object — sometimes you don’t even have enough information to discriminate your object,
  • you do not know when (or if!) the object you want to debug will call the method, so you may not know where to insert a standard breakpoint in the code — and there might be a lot of call sites to that method so you do not know which one your object will go through,
  • you want to avoid stepping many times in the debugger before getting to the point where your object calls the method to debug — also you do not want to miss that point by extra-stepping by mistake!

Breaking when the state of a specific object is written to

In this exercise, we reuse our test iterating over a hundred box objects. We select again a box among all the iterated boxes, but this time we want to stop when the name instance variable of that box is written to. To that end, we install an object-centric breakpoint on all write accesses to that variable in the selected object. Then, we proceed the test and the execution only breaks when the name instance variable of the selected box is modified.

Breaking when the name instance variable of our box object is written

As before, we execute the test and a debugger opens before the boxes are iterated in a loop. In this loop, each box is being printed one by one through the crTrace method.

We select one of the boxes in the boxes collection:

  • Double-click on the boxes variable in the bottom inspector,
  • choose a box in the items pane by double-clicking on the box.

In the opening inspector, go into the raw view and select the name instance variable. Right-click on that variable, and select halt on write in the menu:

  • The breakpoint is installed on all write accesses to this variable, and scoped to our box object,
  • you can see the write accesses and the breakpoint installed on them in the breakpoint pane.

We proceed the execution and it breaks when the name variable of the selected box is modified. There is no halt when other boxes have their name variable modified.

Use-cases for the halt on access breakpoint

The typical use-case for this object-centric breakpoint is when you have many instances of the same class running in your program, and you are interested in knowing when the state of one specific object is modified. You want to answer the following question: “When is the state of this particular object modified during the execution?”

Our box example illustrates this case. Putting a breakpoint on every access to the name variable is long, tedious and error-prone (you may forget some). Watchpoints, Variable Breakpoints or Data breakpoints may help: those tools are able to stop execution when you access an instance variable defined in a class. However, they will stop the execution each time the name variable is modified in any instance of the OCDBox class (here, at each loop iteration!).

Imagine again that you are debugging a graphical object. You are interested in knowing when a given property of that object (i.e., an instance variable) is modified. You do not want all the graphical objects sharing that property to break the execution when the property is modified.

Instead, you use the halt on access breakpoint to stop the execution of a program when a particular property of a specific object is modified:

  • You want to avoid designing super-complex conditional breakpoints to filter your object — sometimes you don’t even have enough information to discriminate your object,
  • you do not know when (or if!) the property will be modified, so you may not know where to insert a standard breakpoint in the code — there might be a lot of methods modifying that property, and many call sites to those methods so you do not know which one your object will go through,
  • you want to avoid stepping many times in the debugger before getting to the point where your object calls the method to debug — and you cannot guarantee that the point where you stopped in the execution will actually modify the variable!

Conclusion: Using object-centric breakpoints for real

We have seen and practiced two kinds of object-centric breakpoints: the halt on call and the halt on access breakpoints. We used a simple example to illustrate how to use those breakpoints. However, in reality it is much more complex to decide when and how to use them. Let’s review a few pieces of advice:

  • First, investigate to know what you have in hands,
  • put a first breakpoint at a strategic place to find your object, or inspect directly the object if you have it,
  • then discriminate: what is the information you need to find?
    • do you need to know why an object seems to not execute a method properly? Use a halt on call breakpoint on this method!
    • do you need to know when or how an instance variable is modified in an object? Use a halt on write breakpoint on this variable!

Object-centric breakpoints are not a magical tool and, often, you may not find your problem directly. The execution may stop a few (or many!) times in the same place before you understand your problem. Still, object-centric breakpoints are a faster, easier way than conventional breakpoints to find the information you need from your execution.

There are also risks: just like conventional breakpoints, an object-centric breakpoint will halt the execution as many times as it is hit at run time. If you install any breakpoint in code that is executed very often, you may interrupt your program for good. Consider that if you put a breakpoint on the code editor's keystroke behavior, you may not be able to write code anymore!

As a closing note, other breakpoints do exist: breakpoints that stop the execution each time a particular object receives any message, or when two specific objects interact together. These breakpoints are not implemented yet, but they are scheduled for the near future!

Testing UFFI Binding with Travis

Part of software system development is writing tests. More than writing tests, developers want to set up a CI that will automatically (on each commit, daily, …) check the status of the project's tests. While it is easy to use Travis as a CI system thanks to the work of smalltalkCI, setting up Travis to test FFI bindings is more tedious.

In the following, I will present the configuration used to test Pharo-LibVLC. Pharo-LibVLC allows one to script VLC from Pharo, so one can play and control music or video.

Travis Set up

First of all, we need to set up the Travis CI. To do so, we create two configuration files: .smalltalk.ston and .travis.yml. Below, I present the default .smalltalk.ston for the VLC Pharo package. It executes the tests loaded by the VLC baseline. However, such tests might need an external library to work, in our case the VLC library.

SmalltalkCISpec {
  #loading : [
    SCIMetacelloLoadSpec {
      #baseline : 'VLC',
      #directory : 'src'
    }
  ]
}


Installing an external library

A project using FFI may need the corresponding native library installed to work. Often, as the developer of an FFI binding project, you know the main library (in the case of VLC, it is libvlc). However, this library has dependencies (in the case of VLC, libvlccore). There are many ways to determine all the libraries you need on your system to use your FFI binding. A simple one is to run the command ldd on the main library.

Once the needed libraries are identified, we have to configure Travis to install those libraries and add them to the PATH. For the installation, the best way is to rely on the apt addon of Travis.

language: smalltalk
dist: bionic

os:
  - linux

before_install:
  - . ci/
  - export PATH=$PATH:/usr/lib

addons:
  apt:
    update: true
    packages:
      - libvlc-dev
      - libvlccore-dev
      - vlc

smalltalk:
  - Pharo64-8.0
  - Pharo64-9.0

matrix:
  fast_finish: true
  allow_failures:
    - smalltalk: Pharo64-9.0


Extra configuration (export display)

In addition to the libraries, an FFI project may need access to a screen. This is the case for VLC, which can spawn windows to display video. Travis comes with a solution to test GUIs. We then only need to export the display to port :99. For instance, with VLC, we add a script to the before_install step of Travis. It will execute the file ci/ that sets up the display.

set -ev

# Setup display
export DISPLAY=:99.0

Use external resources with GitBridge

Finally, you may need external resources to test your FFI binding. For instance, for VLC, I need music and video files. I propose to add the needed resources to the GitHub repository and use them for the tests. However, it is difficult to set up tests that work on your local filesystem, on other filesystems, and with Travis.

Fortunately, Cyril Ferlicot has developed GitBridge. This project allows one to easily access resources present in a local git repository. After adding GitBridge to the VLC FFI project, we can get a FileReference to the res folder of the project by calling the method res on the bridge.
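With GitBridge, this typically means defining a bridge subclass for the project; a sketch (the class name VLCBridge is assumed, and GitBridge is assumed to provide #root answering the repository's root FileReference) could be:

```smalltalk
"Hypothetical sketch of a GitBridge subclass for the VLC project."
GitBridge subclass: #VLCBridge
	slots: {}
	classVariables: {}
	package: 'VLC'

"Answer a FileReference to the res folder of the repository."
VLCBridge class >> res
	^ self root / 'res'
```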

However, one last configuration step is needed to use GitBridge with Travis. It consists in executing a Pharo script. To do so, we change the .smalltalk.ston to execute a specific file.

SmalltalkCISpec {
  #loading : [
    SCIMetacelloLoadSpec {
      #baseline : 'VLC',
      #directory : 'src'
    }
  ],
  #preTesting : SCICustomScript {
    #path : 'res/ci/'
  }
}


Finally, the file contains the script that registers the git project with Iceberg.

(IceRepositoryCreator new 
    location: '.' asFileReference;
    subdirectory: 'src';
    createRepository) register

And now you are done!