
When optimal matters

This last week I’ve somehow been involved in several discussions which, although not explicitly, were about optimizations. In particular, premature optimizations. Of course, we all know they are evil. Or do we? I won’t discuss optimization techniques today, but rather what an IT professional should keep in mind when thinking about optimizations in their programs.

The main arguments I heard this week from people supporting premature optimizations were:

  • Someone using a technology X should know the underlying details of X, or he will fail. Say, if you are a Java programmer, you must not only know there is a GC, but also how it works.
  • A tech guy should always be conscious of the resources being used, e.g., not storing lots of objects in caches, because memory is a finite resource.
  • Assumptions about what should be faster, such as whether or not to use a macro in C.

And these arguments are not totally wrong. But neither are they as totally true as they were stated.

Premature optimization is the root of all evil

You know Donald Knuth? This phrase (attributed to C.A.R. Hoare, btw) became famous because of this paper he authored. The interesting thing is that the phrase, when quoted, is usually taken out of context. The original passage is:

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

Being optimal is not all that matters

When writing software, running optimally is not the only variable to analyze. We want programs to be debuggable, understandable, extensible, maintainable. And sometimes optimal code can be ugly, hard to change, and hard to fix bugs in.

So, let’s look at this piece of code written for Pharo. It takes a string, splits it into substrings using the space character as separator, excludes the substrings that do not exist as symbols in the system, and then converts the rest to symbols.

aString := 'aaaaa bbb Class cccc ddd'.
((aString splitOn: Character space)
    reject: [ :each | (Symbol lookup: each) isNil ])
    collect: #asSymbol

A fairly understandable piece of code. But what goes on behind the scenes?

  • splitOn: creates a new (and temporary) collection. It also creates a string for each substring and copies the contents into those strings;
  • reject: iterates over the result of #splitOn: and creates another new (again temporary) collection;
  • collect: iterates over the result of #reject: and creates a new collection to hold the results of #asSymbol.

In the end, two intermediate collections are created and discarded, and several substrings are created by copying contents and then thrown away (because we only care about the symbols). Yes, that is inefficient: lots of temporary allocations that could trigger the GC, several iterations over collections… we could do better. Let’s look at an alternative version Camillo Bruni (RMoD, Inria) suggested to improve speed and memory usage:

Array streamContents: [ :s |
    aString
        splitOn: Character space
        indicesDo: [ :start :end |
            aString asSymbolFrom: start to: end ifPresent: [ :symbol |
                s nextPut: symbol ] ] ]

This new version, which btw produces the same result, is much more efficient:

  • Streaming the result into place causes only one collection allocation, with no temporary ones;
  • Some special methods introduced into String avoid extra collection allocations and substring copies;
  • One collection means only one iteration :)

But wow, the code became much more complicated (given the simplicity of the example), and less object-oriented. We no longer manipulate the substrings easily by sending messages to them; instead we juggle indices into the source string. Our code is much more aware of the problems we stated before, and resorts to lower-level APIs to avoid them.

Now, extend these ideas to a whole large application. Hundreds or thousands of classes written this way. We write methods of tens (or hundreds, why not?) of lines of code to avoid message sends (and therefore method lookups), we avoid object allocation as much as possible and go for if-based solutions… and soon we have lots of duplicated code, low-level string juggling everywhere… And yet I can tell you (just guessing :^) your program will not be tons faster. What? Now my code is hard to maintain and not even much faster? Not cool…
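
And if you suspect a particular optimization really does matter, measure it instead of arguing about it. Here is a minimal sketch comparing both snippets above with Pharo’s BlockClosure>>#bench, which runs a block repeatedly for a short while and answers a rough runs-per-second figure (the exact report format varies between Pharo versions, and the second block of course assumes Camillo’s String extensions are loaded):

"micro-benchmark of the straightforward version"
| aString |
aString := 'aaaaa bbb Class cccc ddd'.
[ ((aString splitOn: Character space)
    reject: [ :each | (Symbol lookup: each) isNil ])
    collect: #asSymbol ] bench.

"micro-benchmark of the streaming version (needs the String extensions shown above)"
[ Array streamContents: [ :s |
    aString
        splitOn: Character space
        indicesDo: [ :start :end |
            aString asSymbolFrom: start to: end ifPresent: [ :symbol |
                s nextPut: symbol ] ] ] ] bench.

Only if the numbers (on your real data, not a toy string) show a difference your users can feel does the extra complexity start to pay for itself.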

Being optimal when optimal matters

So let’s say we have a function that takes 100,000 database rows, makes some calculations, and shows a simple result to the user. It takes 1 second, which is a lot for today’s machines. But this function is used once per hour…

Now take the code that evaluates the bytecode accessing an object’s field. It gets executed maybe many thousands of times per second. So, if this operation starts to take 1 second… :)

Or take an application that stores data in the background, but wants restoring to be as fast as possible to give a really good user experience. Will you care how long the storing operation takes?

Do we really have to spend a lot of time optimizing code that is almost never used? Or code that does not need to run that fast? Wait! My application runs OK; do I really have to optimize anything?

As Knuth says, 97% of the code is not critical. Only 3% deserves to be optimized.

Understand when and where optimal matters

So now you know the key point (optimize when it matters), and you have understood that it matters in your case. Time to find that 3%. And it may not be so obvious…

Thankfully, engineers invented profiling! Just look around a bit: there are tons of tools to help you understand what you’re doing wrong. Where is memory allocated, and of which type? Is the GC triggered too often? Is a time-consuming function executed too many times? Profiling is a technique that should be in every software engineer’s toolbox.
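
In Pharo, for instance, a quick way to find where the time goes is MessageTally, the classic Squeak/Pharo profiler (recent images also ship a graphical TimeProfiler built on top of it). A minimal sketch, reusing the earlier example:

"profile the block and show a report of where the time is spent
(the form of the report depends on the image you are running)"
| aString |
aString := 'aaaaa bbb Class cccc ddd'.
MessageTally spyOn: [
    10000 timesRepeat: [
        ((aString splitOn: Character space)
            reject: [ :each | (Symbol lookup: each) isNil ])
            collect: #asSymbol ] ]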

The rules of optimization

As a conclusion, today I found this link I want to share about the rules of optimization, and I think they are a pretty good guideline. When you are thinking about making an optimization:

  • First time: Don’t do it!
  • Second time: Don’t do it yet!
  • Third time: OK, but profile and measure first, and only then optimize

There is much to lose when thinking about the optimal solution to a problem only in terms of machine resources. Remember that people’s time to understand the code, to adapt it to new situations and to fix bugs in it is also a valuable resource.

Guille

Pharo 2.0 Released

Pharo Project

Aaaand, a new version of the Pharo project comes out. This is version 2.0 of this dynamically typed, object-oriented programming language and environment. This release includes lots of cool stuff: big improvements to the infrastructure of the system, new core libraries, and lots of cleanups. Let’s make some remarks on this release.

Cool Development Tools, all by default

Pharo’s default browser is now Nautilus by Benjamin Van Ryseghem. Nautilus has lots of cool features, like an alternative Group view, a plugin architecture, and integration with Monticello, refactorings and the Critics browser. Yes! Pharo now includes refactorings by default, since they are one of the cornerstones of development activity. The Critics browser is also included, so code quality can only improve :).

Auto-completion has also seen lots of changes: completion defaults to <enter>, <tab> completes word by word à la command line, and the engine has been revisited to provide better and more meaningful results.

Finally, if you press <shift+enter> you’ll see, in the upper right corner, the Spotlight by Esteban Lorenzano: a simple but powerful tool to quickly browse to a class or method.

Boosted by NativeBoost and Fuel

Pharo wants to be fast. And that’s something NativeBoost and Fuel achieve. That’s why you can find them included by default in the system. NativeBoost (by Igor Stasenko) gives us the ability to execute machine code from the language side, and a new-generation FFI with callbacks. Use it with caution :). Fuel, written by Mariano Martinez Peck and Martin Dias, is a cool object serializer focusing on fast deserialization (materialization), and the ability to serialize any kind of object: Block closures? yes. Contexts? yes. Complete debuggers, so we can restore them and debug failures in other environments? YES.
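
To give an idea of the API, here is a minimal sketch of serializing and materializing an object graph with Fuel. I’m using the FLSerializer / FLMaterializer convenience messages, which may differ slightly between Fuel versions:

"serialize an arbitrary object graph to a file, then load it back"
| original copy |
original := OrderedCollection with: 1 with: 'two' with: #three.
FLSerializer serialize: original toFileNamed: 'demo.fuel'.
copy := FLMaterializer materializeFromFileNamed: 'demo.fuel'.
"copy is a fresh, independent graph equivalent to the original"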

UI Front – Spec and Keymappings

Pharo 2.0 includes two new cool libraries on the UI front: Spec and Keymappings.

Spec is a framework, mainly the work of Benjamin Van Ryseghem under the tutelage of Stéphane Ducasse, for building UI components declaratively. Its main focus is on component reuse and composability. Spec was included in Pharo 2.0 and some tools were reimplemented to use it. Why don’t you give it a try?

On the other side, Keymappings is a shortcut library mostly rewritten by me (Guille Polito) to adapt it to Pharo. Its main objective is to provide common shortcut semantics for desktop UIs and to remove hardcoded shortcut handling spread all over the system. Pharo 2.0 includes Keymappings and has already replaced some uses of the old-fashioned (hardwired and messy, cough cough) shortcut declarations with nice keymapping ones. On the documentation side, I still owe you :). I promise a nice tutorial post this week!

System changes – System Announcements, RPackage, FileSystem, branded VM

On the internals of the system, the notification of system events was replaced by System Announcements, RPackage was introduced so the old and ugly packaging system can be migrated away slowly, and the old FileDirectory was taken down, with all its usages replaced by the cool new FileSystem library (already there in 1.4).
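
If you haven’t met FileSystem yet, it is worth a look: everything revolves around FileReference objects. A tiny sketch (method names as I recall them, so double-check in your image):

"write a file in the image's working directory and read it back"
| file |
file := FileSystem disk workingDirectory / 'example.txt'.
file writeStreamDo: [ :stream | stream nextPutAll: 'Hello from FileSystem' ].
Transcript show: file contents; cr.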

Also, the Pharo VM is now branded, and includes many fixes and bundled libraries (NativeBoost and SSL plugins, Cairo, FreeType). You should run your Pharo images on a Pharo VM, which you can identify by its nice Pharo icon ;).

And of course there is lots of other clean and cool new stuff to see, like SSL support, command-line tools, non-UI-blocking notifications… A more detailed list is here. So grab Pharo, have a look, enjoy, and give feedback. Remember that any contribution is valuable, no matter how small it looks.

Download Pharo
Pharo website
Joining and helping
Pharo By Example book (available as a free PDF)
Screencasts!
Reporting problems
Pharo vision document

Chaus, Guille

Bootstrapping: finding the missing link

A few months ago I got involved in a crazy project: bootstrapping Pharo. I took some existing code, played with it, hacked it, modified it, understood it. Now I think I have some idea of what a bootstrap is and what its advantages are. I’ll try to give a brief introduction to the project: what it is about, its advantages, an overview of the current military-secret results, and an insight into what is to come.

I recommend you have a look at my last post (the image dilemma) before reading on.

This project is one of the ESUG projects supported this year by the Google Summer of Code program.

What is a Bootstrap

The encyclopedic definition: a bootstrap of a system is a process that can generate the smallest subset of that system that may be used to reproduce the complete one.

I mean, you have an explicit process that can generate the minimal version of your system.

OK, easier: you kick yourself to gain momentum and start from a better place :).

Then, bootstrapping software systems or languages normally means that you will somehow enhance the original process that created the system.

Some examples to clarify:

  • When your computer is turned on, someone has to bootstrap the little program that is able to load other programs :).
  • Generating a development environment with very basic tools will improve your work a lot (when you use your development environment).
  • The C compiler is written in C. That means it somehow compiles itself. Of course, if you do not like assembly much, reading the C implementation is a great improvement.
  • Pharo implements traits and uses them in the core of the system to empower the design.
  • A big part of Pharo’s VM is written in Smalltalk!

The need of bootstrapping Pharo

Did you know the image you download from the Pharo website is the same one that has been around for years? I mean, not exactly the same one, but a binary copy of it :). The fact is that years ago (yeah, ancient history) god created the first image and it started evolving, one little change after another, into the Pharo we know today.

Now, as in evolution, we find missing links in Pharo. We do not know how some object became the way it is, or how it was initialized. The code is simply nowhere; it’s a missing link. Also, as years passed, our ancestor became chaotic. It grew in many different uncontrolled and unordered ways. Since the Pharo Project started, one of its goals has been cleaning up this mess, but re-modularising and cleaning the system is a hard, long, and bothersome process.

The outputs and advantages of Bootstrapping Pharo will be:

  • getting tools to detect problems: bad dependencies, nonexistent initializations, code that does not really work but was never executed before.
  • This initialization process will be explicit and open.
  • We will be able to start the next Pharo from scratch, and since we will be able to change this explicit process, our next-generation Pharo will be cleaner and fancier. It will be able to easily acquire new features: namespaces/modules, security, remote tools, mirrors.
  • But also, since it will allow people to create a custom system, researchers will have an invaluable tool to fulfil their own purposes. They will be able to experiment freely.

Current status of the project

The project has already had a first output: the image writer. The image writer is a little tool that traces an object graph from a SmalltalkImage object and writes that graph into a .image file. I’ll talk about this sub-project in a future post.

The rest of the code is still a military secret. OK, I can give it to you, but you are responsible if it blows up in your face :).

The results of the project so far are:

  • It can create a Smalltalk image living inside another image.
  • This inner/guest image can be written out to a new .image file.
  • With this approach a small kernel of 1.1MB has been reached.
  • SpaceTally runs and prints reports showing how space is distributed among objects.
  • A tool was developed to detect every uninitialized class variable / class instance variable and every reference to nonexistent globals.

Soon we will have all of this publicly available on the Pharo Jenkins server.

Next Steps

  • Jenkins jobs :)
  • Using Jenkins feedback to speed up the cleaning process
  • Remodularization to get an even smaller kernel
  • Maybe some little experiment to bootstrap MicroSqueak and learn from it
  • Bootstrap from source code.

I’ll try to keep you updated often. BTW, any ideas, criticism or suggestions are welcome. I’m not a guru, I’m just learning, like everyone else :).

Saludos!