Recover a broken image with Oz

While hacking the VM the other day, I ran into a very special situation: I got an image that crashed on startup. And I had uncommitted work in it!! So I started the crusade to recover it.

I know that some time ago, people like Sean De Nigris had the same problem and ended up hacking the VM to solve it. You can get an idea of what exactly happened by reading the original mailing list post.

But doing that required debugging the VM at the bytecode level (that is, stepping through each bytecode internally) until reaching the point of the problem and solving it.

The concrete situation

The last thing I executed in my broken image was the following two statements in one doit:

Smalltalk snapshot: true andQuit: true.
someObject doSomethingThatCrashesTheVM.

So when my image was started, it executed all the startup code, returned to the context of the code above, and tried to send the doSomethingThatCrashesTheVM message to someObject, causing a crash.

I knew that Oz could handle this, so I went for it.

The recovery environment

I needed to put my image in a place where I could fix it. And that place is an object space, which I am implementing in Oz. The idea is to put the broken image inside another image with an object space, so we can manipulate it freely, fix it, and restart it. With the Oz library and VM, creating the object space is as easy as:

objectSpace := Pharo20 loadFrom: 'broken.image'.

The diagnosis

Once we have the objectSpace object, we need to understand how to solve our problem. So I wandered around by getting the scheduler of the loaded image and looking at the active process.

ps := (objectSpace specialObjectsArray at: 4) value asSchedulerMirror.
activeProcess := ps activeProcess.

And once we have the active process, we can have a look at its context and the method that is being executed. We can follow the sender chain until we reach the context with the problem:

cm := objectSpace
          fromRemoteCompiledMethod: activeProcess context sender method.
cm decompile ==> ' DoIt
	Smalltalk snapshot: true andQuit: true.
	^ someObject doSomethingThatCrashesTheVM.'

Now that we are there, we have to fix it.

The medicine

There are many ways to manipulate the contexts and processes to solve this. But there is one that is really easy and simple.

We only need to change the someObject reference to nil. That way, we avoid the crash during startup and get a debugger instead, with a:

“Undefined Object does not understand doSomethingThatCrashesTheVM”

Nice! So, in order to do that, I needed to understand a bit how the code was compiled. I did that by sending a couple of messages to the compiled method to inspect its structure, and concluded that someObject was referenced through one of the literal associations of the method.
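For illustration, that kind of inspection boils down to looking at the method's literal frame. Here is a small sketch, assuming cm answers the standard CompiledMethod protocol (only #numLiterals and #literalAt: are used):

"Print every literal of the compiled method; one of them is the
association that binds someObject."
1 to: cm numLiterals do: [ :index |
    Transcript
        show: index printString , ' -> ' , (cm literalAt: index) printString;
        cr ].

Once the right literal is spotted (here, the fourth one), the fix is to make that association point to the remote nil: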

association := cm literalAt: 4.
association instVarAt: 2 put: objectSpace nilObject.

I restarted the image in the object space (with what is, so far, an ad-hoc method):

[[ objectSpace giveChanceToRun. true ] whileTrue: [ ]] forkAt: 80.

So you make a loop and in each iteration you give the object space a window of time to run. After that, the broken image will appear before you (if you fixed it reasonably well) :).

Then I saved it under another name, restored the changes file manually (because my object spaces solution does not handle that yet :)), and voilà: I had my image back, with all my objects there, and I could commit :).

Installing Pharo in many flavors

Buon giorno!

Recently, many new ways to ease the installation and deployment of Pharo applications have appeared. These installation approaches significantly enhance the experience of using Pharo: they simplify dependency management with OS libraries, make it possible to write deployment bash scripts, and let you load prebuilt images for any (and many) taste(s).

However, if you are not on the Pharo mailing lists, you have probably not heard about many of these installation mechanisms, and therefore you cannot enjoy them. So, let's summarize some of them, at least the ones I know. If you know more, contact me so we can include them.

Manual download from the webpage

Downloading Pharo manually is the easiest but most primitive approach. Go to the download page [1] and download the flavor of Pharo you like the most. You will find there the 1.3, 1.4 and 2.0 releases, plus the option to get the latest (still in development) version of Pharo 3.0.

Focusing on what is available for Pharo 2.0, you have two options:

  • under the category “Pharo Installers”: a package specific to your operating system, containing both the virtual machine and the image with the runtime and development environment;
  • under the category “Custom Downloads”: the possibility to download them separately. This option is useful if you already have a virtual machine and only want a new image to play with.

[1] http://www.pharo-project.org/pharo-download

Manual download from the file server

On the Pharo file server [2] you will find the virtual machine and image releases, as well as other downloadable resources. You can use these URLs to create your own download scripts.

[2] http://files.pharo.org/

Virtual Machine PPA for Ubuntu Linux

There is a PPA available for Ubuntu users (it probably also works on any distribution using the apt package manager) which takes care of downloading the virtual machine and its dependencies, simplifying installation and deployment. We thank Damien Cassou for finally taking the initiative of creating the PPA!

#install the PPA repository
sudo add-apt-repository ppa:pharo/stable
sudo apt-get update

#install pharo vm core
sudo apt-get install pharo-vm-core

#install pharo vm for desktop (with graphical dependencies)
sudo apt-get install pharo-vm-desktop

ZeroConf scripts

The ZeroConf scripts [3] are prebuilt bash scripts that ease the download and installation of Pharo. They are served by get.pharo.org and can be parameterized to get the VM/image pair you want.

Their usage, as described on the ZeroConf webpage, can be summarized as:

curl url | bash
#or if curl is not available:
wget -O- url | bash

where url is replaced following the pattern vmVersion | imageVersion | vmVersion+imageVersion (a VM alone, an image alone, or an image plus its VM).

For example, some valid usages of ZeroConf are

#downloading latest 3.0
curl get.pharo.org/alpha | bash

#downloading stable 2.0 + vm
curl get.pharo.org/20+vm | bash

#downloading latest non stable vm
curl get.pharo.org/vmLatest | bash

You can look up the valid values on the ZeroConf page [3]. These scripts are currently heavily used by the CI infrastructure of Pharo. We thank Camillo Bruni for pushing this forward!

In fact, this is the way I download my own images right now, because the URL is easy to memorize and using the terminal is pretty straightforward.

[3] http://get.pharo.org/

Pharo Launcher

The Pharo Launcher is an application to download and manage prebuilt and custom Pharo images. Below I paste the release notes from the first release:

“Erwan and I are proud to announce the first release of the Pharo
Launcher, a cross-platform application that

– lets you manage your Pharo images (launch, rename, copy and delete);
– lets you download image templates (i.e., zip archives) from many
different sources (Jenkins, files.pharo.org, and your local cache);
– lets you create new images from any template.

The idea behind the Pharo Launcher is that you should be able to
access it very rapidly from your OS application launcher. As a result,
launching any image is never more than 3 clicks away.

Download: https://ci.inria.fr/pharo-contribution/job/PharoLauncher/PHARO=30,VERSION=bleedingEdge,VM=vm/lastSuccessfulBuild/artifact/PharoLauncher.zip

Please report bugs on the ‘Launcher’ project at https://pharo.fogbugz.org

You can contribute to this project. All classes and most methods are
commented. There are unit tests. Please contribute!

Source code: http://www.smalltalkhub.com/#!/~Pharo/PharoLauncher
CI: https://ci.inria.fr/pharo-contribution/job/PharoLauncher

Pharo Launcher screenshot

The Pharo Launcher is an initiative of Erwan Douaille and Damien Cassou. And of course, you can contribute to it. In their release notes they listed some points they would like to enhance in this project:

  • check if a template is already downloaded before downloading it
  • add a preference mechanism (for, e.g., quit after launch, definition of your own template groups, location of downloaded templates and images)
  • put the launcher in the Pharo Ubuntu package so that the launcher becomes a registered application of the system (https://launchpad.net/~pharo/+archive/stable)
  • make sure the pharo launcher does not load your personal scripts (like fonts and MC configuration)
  • add a toolbar to enhance the discoverability of the features (currently everything is in contextual menus)
  • make sure rename and copy actions propose default values
  • make sure no debugger pops up when a user presses cancel or enters an invalid name
  • propose multiple kinds of sorting (last used, most frequently used, alphabetically on the name)
  • give some information about each template (build date, pharo version)

Conclusion

Pharo is growing, and getting sexy. Deployment is now easy, and it will only get easier in the future. What are you waiting for?

#Just do this!
curl get.pharo.org/20+vm | bash
./pharo-ui Pharo.image

When optimal matters

This last week I have somehow been involved in several discussions which, although not explicitly, were about optimizations. In particular, premature optimizations. Of course, we all know they are evil. Do we? I will not discuss optimization techniques today, but rather what an IT professional should think about when considering optimizations in their programs.

The main arguments I heard this week from people supporting premature optimizations were:

  • A guy using some technology X should know the underlying details of X, or he will fail. Say, if you are a Java programmer, you must not only know there is a GC, but also how it works.
  • A tech guy should always be conscious of the resources used, e.g., not storing lots of objects in caches because memory is a finite resource.
  • Assumptions about what should be faster, for example whether or not to use a macro in C.

And these arguments are not totally wrong. But they are not as absolutely true as they were stated to be.

Premature optimization is the root of all evil

You know Donald Knuth? This phrase (attributed to C.A.R. Hoare, by the way) became famous because of a paper Knuth authored. The interesting thing is that, when quoted, it is usually taken out of context. The original phrase is:

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

Being optimal is not all that matters

When writing software, optimal execution is not the only variable to analyze. We want programs to be debuggable, understandable, extensible, maintainable. And sometimes optimal code is ugly, hard to change, and hard to fix bugs in.

So, let’s look at this piece of code written for Pharo. It takes a string, splits it into substrings using the space character as separator, excludes the substrings that do not exist as symbols in the system, and converts the remaining ones to symbols.

aString := 'aaaaa bbb Class cccc ddd'.
((aString splitOn: Character space)
    reject: [ :each | (Symbol lookup: each) isNil ])
    collect: #asSymbol

A kind of understandable piece of code. But what goes on behind the scenes?

  • splitOn: creates a new (and temporary) collection. It also creates a string for each substring and copies the contents into those strings;
  • reject: iterates over the result of #splitOn: and creates a new (again temporary) collection;
  • collect: iterates over the result of #reject: and creates a new collection to hold the results of #asSymbol

In the end, two intermediate collections are created and discarded, and several substrings are created by copying contents only to be discarded as well (because we only care about the symbols). Yes, that is inefficient: lots of temporary allocations that may trigger the GC, several iterations over collections… we could do better. Let's see an alternative version that Camillo Bruni (RMoD, Inria) suggested to improve speed and memory usage:

Array streamContents: [ :s |
    aString
        splitOn: Character space
        indicesDo: [ :start :end |
            aString asSymbolFrom: start to: end ifPresent: [ :symbol |
                s nextPut: symbol ] ] ]

This new version, which by the way ends up with the same result, is much more efficient (a quick way to check that yourself is sketched after the list):

  • Streaming the result causes only one collection allocation, with no temporary ones;
  • Some special methods introduced into String avoid extra collection allocations and substring copies;
  • One collection means only one iteration :)
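If you want to actually check such claims rather than guess, here is a rough sketch. It assumes the helper methods from Camillo's snippet are loaded and reuses aString from above; naive and streaming are just names for blocks wrapping the two versions, and #timeToRun answers the elapsed time (milliseconds in Pharo 2.0):

"Wrap each version in a block and time many runs of it."
naive := [ ((aString splitOn: Character space)
    reject: [ :each | (Symbol lookup: each) isNil ])
    collect: [ :each | each asSymbol ] ].
streaming := [ Array streamContents: [ :s |
    aString
        splitOn: Character space
        indicesDo: [ :start :end |
            aString asSymbolFrom: start to: end ifPresent: [ :symbol |
                s nextPut: symbol ] ] ] ].
[ 100000 timesRepeat: naive ] timeToRun.
[ 100000 timesRepeat: streaming ] timeToRun.

Exact numbers will vary from machine to machine; the point is to measure instead of assuming.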

But wow, the code became much more complicated (given the simplicity of the example), and less object oriented. We no longer manipulate the substrings simply by sending messages to them; instead we handle indices into the source string. Our code is much more aware of the problems we stated before, and resorts to lower-level APIs to avoid them.

Now extend these ideas to a whole large application. Hundreds or thousands of classes written this way. We write methods of tens (or hundreds, why not?) of lines of code to avoid message sends (and therefore method lookups), we avoid object allocation as much as possible and go for if-based solutions… and soon we have lots of duplicated code, stringy code everywhere… And yet I can tell you (just guessing :^) your program will not be tons more optimal. What? Now my code is hard to maintain and not much faster? Not cool…

Being optimal when optimal matters

So let’s say we have a function that takes 100,000 database rows, makes some calculations, and shows a simple result to the user. It takes 1 second, which is a lot for today's machines. But this function is used once per hour…

Now take the code that evaluates the bytecode accessing an object's field. It gets executed maybe many thousands of times per second. So, if this operation starts to take 1 second… :)

Or take an application that stores data in the background, but wants restoring to be as fast as possible to give a really good user experience. Will you care how long the storing operation takes?

Do we really have to spend a lot of time optimizing code that is hardly ever used? Or code that does not need to run that fast? Wait! My application runs OK; do I really have to optimize anything?

As Knuth says, 97% of the code is not critical. Only 3% deserves to be optimized.

Understand when and where optimal matters

So now you know the key point (optimize when it matters), and you have understood that it matters in your case. Time to find that 3%. And it may not be so obvious…

Thankfully, engineers invented profiling! Just look around a bit: there are tons of tools to help you understand what you are doing wrong. Where is memory allocated, and of which type? Is the GC triggered too often? Is a time-consuming function executed too many times? Profiling is a technique that should be in every software engineer's toolbox.
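In Pharo, for example, a quick first look can be taken with MessageTally, which samples where the time goes while a block runs. A minimal sketch (the workload inside the block is just an illustration):

"Profile an arbitrary block and report in which methods the time is spent."
MessageTally spyOn: [
    10000 timesRepeat: [
        ('aaaaa bbb Class cccc ddd' splitOn: Character space)
            collect: [ :each | each asSymbol ] ] ].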

The rules of optimization

As a conclusion, today I found this link I want to share about the rules of optimization. I think they are a pretty good guideline. When you are thinking about making an optimization:

  • First time: Don’t do it!
  • Second time: Don’t do it yet!
  • Third time: OK, but profile and measure first, and only then optimize

There is much to lose when thinking about the optimal solution to a problem only in terms of machine resources. Remember that people's time to understand the written code, to adapt it to new situations and to fix bugs in it is also a valuable resource.

Guille

Keymappings 101 – for Pharo 2.0


The Pharo 2.0 release includes the Keymappings library. Keymappings is a library for configuring shortcuts for the current UI library (Morphic). It models concepts like shortcuts, key combinations and event bubbling. It is a very simple library, which I will introduce gradually in this post.

Key combinations

Keymappings' main task is to associate a key combination with an action. So we have to build up those key combinations. The simplest key combination is the one that gets activated when a single key is pressed. We call these single key combinations:

$a asKeyCombination. -> "single key combination for the A key."
Character cr asKeyCombination. -> "single key combination for the return key."

Usually, though, key combinations get a bit more complex. It is very common to combine single keys with meta keys or modifiers: the well known ctrl, shift, alt and command keys. To build a modified key combination we can do as follows:

$a ctrl. -> "a modified key combination for Ctrl+A"
$a ctrl shift. -> "a modified key combination for Ctrl+Shift+A"

It is important to notice that key combinations are not case sensitive: the characters a and A are taken as the same, since they are the same key.

Have you ever used Emacs, Eclipse or Visual Studio? Then you probably know sequences of key combinations that launch one single action, like Alt+Shift+X, T (to run JUnit tests in Eclipse). Keymappings can do that too:

$a command shift, $b shift. -> "key sequence (Cmd+Shift+A, Shift+B)"

Sometimes you want an action to be activated in two different cases. Keymappings calls these options, and they get triggered when either of the alternatives is activated:

$a command | $b command. -> "key combination (Cmd+A or Cmd+B)"

Finally, since Pharo is a cross-platform system and it is important to provide a good user experience with the most suitable shortcut layout, Keymappings implements platform-specific shortcuts, which get activated only when running on the corresponding platform:

$a command win | $b command unix. -> "Cmd+A on windows, but Cmd+B on unix"

Shortcut configurations

Now that you know how to build key combinations for your purposes, you probably want to get to the action: map those combinations to actions and make them work!

Single shortcut configuration

The simplest way to attach a shortcut to a morph is by sending it the #on:do: message. The first argument is a key combination and the second one an action. In the example below, a workspace is created with two shortcuts:

  • when Cmd+Shift+A is pressed, the workspace is deleted
  • when Cmd+Shift+D is pressed, an information growl should appear yelling ‘this shortcut works!’

w:= Workspace new.
morph := w openLabel: 'keymapping test'.
morph on: $a shift command do: [ morph delete ].
morph on: $d shift command do: [ UIManager default inform: 'this shortcut works!' ].

Easy, huh? So let’s move on…

Shortcut categories

Sometimes you want to group and organize shortcuts in a meaningful way and apply them all together to a morph. Sometimes you want morphs from different hierarchies to easily share the same group of shortcuts. Those groups of shortcuts are what Keymappings calls categories. A category is a group of shortcuts, so far (this will change in the future) defined statically by using a <keymap> pragma on the class side:

"defining a category"
SystemWindow class>>buildShortcutsOn: aBuilder
    <keymap>

A class-side method marked with the <keymap> pragma will be called with a builder object, which can be used to define a named set of shortcuts:

SystemWindow class>>buildShortcutsOn: aBuilder
    <keymap>
    (aBuilder shortcut: #close)
        category: #WindowShortcuts
        default: $w ctrl | $w command mac
        do: [ :target | target delete ]
        description: 'Close this window'.

Shortcuts defined through the builder specify the name of the category they belong to, a default key combination, an action, and a description. All this metadata is there to be used as settings in the future.

Finally, in order to have your morph handle those shortcuts, you can use the #attachKeymapCategory: message as in:

w:= Workspace new.
morph := w openLabel: 'keymapping test'.
morph attachKeymapCategory: #Growling.

Bubbling

Keymappings’ shortcuts bubble up to their parent morph if they are not handled, all the way up to the main world morph (a small sketch follows the list below). That has two main consequences:

  • Shortcuts for your application can be designed in a hierarchical way; and
  • Every time a shortcut does not work for you, it means that a morph below you has handled it ;) (be careful with text editors that handle loooots of key combinations)
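As a tiny sketch of the first point (hypothetical morphs, reusing the #on:do: API shown above), a shortcut attached only to a parent morph is still reachable while one of its submorphs has the keyboard focus, because the unhandled key event bubbles up the owner chain:

"The shortcut is attached to the parent only; if the child does not handle
Cmd+Shift+G itself, the key event bubbles up and the parent handles it."
parent := Morph new.
child := Morph new.
parent addMorph: child.
parent on: $g shift command do: [ UIManager default inform: 'handled by the parent' ].
parent openInWorld.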

Future work

So far, so good, but there are some plans for Keymappings during Pharo 3.0 development, which I can anticipate:

  • Some API changes: #on:do: can be confused with exception or announcement handling. #asShortcut will probably be properly renamed as #asKeyCombination. There is an inconsistency between the #command and #ctrl messages…
  • A lot of renames and new comments :)
  • Spread it all over the system
  • Make keymap categories first-class objects, no longer just a symbol ;)

À la prochaine!
Guille